query (string, 273-149k chars) | pos (string, 18-667 chars) | idx (int64, 0-1.99k) | task_name (1 class)
---|---|---|---
In order to mimic the human ability of continual acquisition and transfer of knowledge across various tasks, a learning system needs the capability for life-long learning, effectively utilizing the previously acquired skills. As such, the key challenge is to transfer and generalize the knowledge learned from one task to other tasks, avoiding interference from previous knowledge and improving the overall performance. In this paper, within the continual learning paradigm, we introduce a method that effectively forgets the less useful data samples continuously across different tasks. The method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a life-long learning property. This effectively maintains a constant training size across all tasks. We first provide some mathematical intuition for the method and then demonstrate its effectiveness with experiments on variants of MNIST and CIFAR100 datasets. It is a typical practice to design and optimize machine learning (ML) models to solve a single task. On the other hand, humans, instead of learning over isolated complex tasks, are capable of generalizing and transferring knowledge and skills learned from one task to another. This ability to remember, learn and transfer information across tasks is referred to as lifelong learning or continual learning BID16 BID3 BID11. The major challenge for creating ML models with lifelong learning ability is that they are prone to catastrophic forgetting BID9 BID10. ML models tend to forget the knowledge learned from previous tasks when re-trained on new observations corresponding to a different (but related) task. Specifically when a deep neural network (DNN) is fed with a sequence of tasks, the ability to solve the first task will decline significantly after training on the following tasks. The typical structure of DNNs by design does not possess the capability of preserving previously learned knowledge without interference between tasks or catastrophic forgetting. There have been different approaches proposed to address this issue and they can be broadly categorized in three types: I) Regularization: It constrains or regularizes the model parameters by adding some terms in the loss function that prevent the model from deviating significantly from the parameters important to earlier tasks. Typical algorithms include elastic weight consolidation (EWC) BID4 and continual learning through synaptic intelligence (SynInt) BID19. II) Architectural modification: It revises the model structure successively after each task in order to provide more memory and additional free parameters in the model for new task input. Recent examples in this direction are progressive neural networks BID14 and dynamically expanding networks BID18. III) Memory replay: It stores data samples from previous tasks in a separate memory buffer and retrains the new model based on both the new task input and the memory buffer. Popular algorithms here are gradient episodic memory (GEM) BID8, incremental classifier and representation learning (iCaRL) BID12.Among these approaches, regularization is particularly prone to saturation of learning when the number of tasks is large. The additional / regularization term in the loss function will soon lose its competency when important parameters from different tasks are overlapped too many times. 
Modifications on network architectures like progressive networks resolve the saturation issue, but do not scale as number and complexity of tasks increase. The scalability problem is also present when using memory replay and often suffer from high computational and memory costs. In this paper, we propose a novel approach to lifelong learning with DNNs that addresses both the learning saturation and high computational complexity issues. In this method, we progressively compresses the input information learned thus far along with the input from current task and form more efficiently condensed data samples. The compression technique is based on the statistical leverage scores measure, and it uses frequent directions idea in order to connect the series of compression steps for a sequence of tasks. Our approach resembles the use of memory replay since it preserves the original input data samples from earlier tasks for further training. However, our method does not require extra memory for training and is cost efficient compared to most memory replay methods. Furthermore, unlike the importance assigned to model specific parameters when using regularization methods like EWC or SynInt, we assign importance to the training data that is relevant in effectively learning new tasks, while forgetting less important information. Before presenting the idea, let's first setup the problem: DISPLAYFORM0..} represent a sequence of tasks, each task consists of n i data samples and each sample has a feature dimension d and an output dimension m, i.e., input A i ∈ R ni×d and true output B i ∈ R ni×m. Here, we assume the feature and output dimensions are fixed for all tasks 1. The goal is to train a DNN over the sequence of tasks and ensure it performs well on all of them. Here, we consider that the network's architecture stays the same and the tasks are received in a sequential manner. Formally, with f representing a DNN, our objective is to minimize the loss 2: DISPLAYFORM1 Under this setup, let's look at some existing models: Online EWC trains f on task (A i, B i) with a loss function containing additional penalty terms min f f (DISPLAYFORM2 j=1 Λ j and each Λ j is defined as the change of important parameters (using Fisher information matrix) in f with respect to the jth task. GEM keeps an extra memory buffer containing data samples from each of the previous tasks M k with k < i, it trains on the current task (A i, B i) with a regular loss func- DISPLAYFORM3, but subject to inequalities on each update of f, DISPLAYFORM4 The new approach OLSS is to find an approximation of A in a streaming manner, i.e., to form an i to approximate DISPLAYFORM5 T such that the inĝ DISPLAYFORM6 is likely to perform on all tasks as good as DISPLAYFORM7 To avoid extra memory and computation cost during the training process, we restrict the approximate i to have the same number of rows as the current task A i.Equation and represent nonlinear least squares problems. It is to be noted that a nonlinear least squares problem can be solved with an approximation deduced from an iteration of linear least squares problems with J T J∆θ = J T ∆B where J is the Jacobian of f at each update (Gauss-Newton Method). Besides this technique, there are various approaches in addressing this problem. Here we adopt a cost effective simple randomization technique -leverage score sampling, which has been used extensively in solving large scale linear least squares and low rank approximation problems BID17 BID0. 
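The leverage score sampling technique invoked here is easy to state concretely. Below is a minimal NumPy sketch (not from the paper) of how leverage scores are computed from an SVD and used to sample rows of an over-determined linear least-squares problem; the sketch size, the omission of the usual 1/sqrt(p_i) reweighting, and all variable names are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def leverage_scores(A):
    """Statistical leverage scores of the rows of A (n x d, n > d):
    squared row norms of the matrix of left singular vectors."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)   # U is n x d
    return np.sum(U ** 2, axis=1)                     # scores sum to d

def sampled_least_squares(A, b, num_rows, rng):
    """Approximate min_x ||Ax - b|| by solving on a leverage-score-sampled row subset.
    The usual 1/sqrt(p_i) reweighting is omitted to mirror the plain row selection
    used in the lifelong-learning algorithm described later."""
    scores = leverage_scores(A)
    idx = rng.choice(A.shape[0], size=num_rows, replace=False, p=scores / scores.sum())
    x_hat, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(2000)
print(np.linalg.norm(sampled_least_squares(A, b, 200, rng) - x_true))  # small error
```

The point of the sampling step is that rows with high leverage carry most of the non-uniform structure of A, so a small, well-chosen subset is enough to recover a good solution.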
Definition 1 BID1 ) Given a matrix A ∈ R n×d with n > d, let U denote the n × d matrix consisting of the d left singular vectors of A, and let U (i,:) denote the i-th row of U, then the statistical leverage score of the i-th row of A is defined as U (i,:) 2 2 for i ∈ {1, ..., n}.Statistical leverage scores define the relevant non-uniformity structure of a matrix and a higher score indicates a heavier weight of the row contributing to the non-uniformity of the matrix; it has been widely used for constructing a randomized sketch of a matrix BID1 BID17. In our case, given an input matrix A, we will compute the leverage score of each row, then sample the rows with probability proportional to the scores. Using leverage score sampling, we are able to select the important samples given a dataset. The remaining problem is to embed it in a sequence of tasks. In order to achieve this, we make use of the concept of frequent directions. Frequent directions extends the idea of frequent items in item frequency approximation problem to a matrix BID7 BID2 BID15. Given a matrix A ∈ R n×d whose rows are received one by one and a space parameter, the algorithm considers the first 2 rows in A and shrinks its top orthogonal vectors by the same amount to obtain an × d matrix; then combines them with the next rows in A for the next iteration, repeat the procedure until reaching the final sketch of dimension × d. Frequent directions algorithm is targeted at finding a low rank approximation on a continuously expanding matrix. This is well suited for a continuous stream of data (tasks) within the lifelong learning setting. We present the step by step procedure of performing leverage score sampling together with compression using frequent directions idea in Algorithm 1. In our setting, we append the new task data samples to the existing buffer set and perform leverage score sampling to form a new buffer set and then train on it, this process is repeated for the entire sequence of tasks. Input: A sequence of tasks {FIG1, ..., (A i, B i),...} with A i ∈ R ni×d and B i ∈ R ni×m; initialization of the model parameters; a space parameter i.e., number of samples to pass in the model for training. It can be set as n i or even smaller after receiving the i-th task, which avoids extra memory and computations during training. Output: A trained neural network on a sequence of tasks. Step 1 Initialize a buffer set S = {Â,B} where both andB are empty. Step 2 While the ith task is presented:Step 3 If andB are empty: Step 4 set = A i andB = B i, Step 5 else:Step 6 set =  A i andB = B B i.Step 7 Perform SVD: DISPLAYFORM0 Step 8 Randomly select rows of andB without replacement based on probability U j,:2 2 / U 2 F for j ∈ {1, ..., n i +} (or j ∈ {1, ..., n i} when i = 1) and set them as andB respectively. Step 9Train the model with ∈ R ×d andB ∈ R ×m. When n i is large, the SVD (singular value decomposition) of matrix ∈ R (ni+)×d in Step 6 is computationally expensive, we could use a streaming SVD method to speed up the process if is chosen much smaller than n i. In that case the computational cost for SVD could be reduced from O((n i +)d2 ) to O(log 2 (n i +) d 2 ) (assuming d < < n i). In addition, there exists various efficient ways to approximate the leverage scores BID1 BID13 which would further reduce the computational cost. 
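Piecing together the garbled pseudocode of Algorithm 1, the buffer-update loop can be sketched as follows. This is one reading of the algorithm, with hedges: the buffer (A_hat, B_hat) is stacked with the new task's data, leverage scores are computed from an SVD of the stacked inputs, ell rows are kept without replacement, and the model is trained on the compressed buffer. The `train_model` callable, the choice ell = n_i, and the use of a plain (non-streaming) SVD are illustrative simplifications.

```python
import numpy as np

def compress(A_buf, B_buf, A_new, B_new, ell, rng):
    """One compression step: stack the buffer with the new task and keep ell rows
    sampled (without replacement) with probability proportional to leverage scores."""
    A_cat = A_new if A_buf is None else np.vstack([A_buf, A_new])
    B_cat = B_new if B_buf is None else np.vstack([B_buf, B_new])
    U, _, _ = np.linalg.svd(A_cat, full_matrices=False)
    scores = np.sum(U ** 2, axis=1)
    keep = rng.choice(A_cat.shape[0], size=ell, replace=False, p=scores / scores.sum())
    return A_cat[keep], B_cat[keep]

def olss(tasks, train_model, rng=np.random.default_rng(0)):
    """tasks: iterable of (A_i, B_i) arrays; train_model: placeholder training call.
    The buffer never grows beyond ell rows, so training cost stays constant per task."""
    A_buf, B_buf = None, None
    for A_i, B_i in tasks:
        ell = A_i.shape[0]                      # space parameter; may be set smaller
        A_buf, B_buf = compress(A_buf, B_buf, A_i, B_i, ell, rng)
        train_model(A_buf, B_buf)
    return A_buf, B_buf
```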
Remark: A major concern with this algorithm is that leverage scores is a linear measure, i.e., the selected samples capture the important information embedded linearly in the data matrix which may not fully represent the importance of the data samples. Another related issue is that the nonlinear information probably depend on the structure of f, the DNN. As such, there may be some underlying dependency of a data sample's importance on the DNN architecture. We leave this open as a future research direction. We evaluate the performance of the proposed algorithm OLSS on three classification tasks used as benchmarks in related prior work.• Rotated MNIST : a variant of the MNIST dataset of handwriten digits BID6, the digits in each task are rotated by a fixed angle between 0 • to 180•. The experiment is on 20 tasks and each task consists of 60, 000 training and 10, 000 testing samples.• Permutated MNIST BID4: a variant of the MNIST dataset BID6, the digits in each task are transformed by a fixed permutation of pixels. The experiment is on 20 tasks and each task consists of 60, 000 training and 10, 000 testing samples.• Incremental CIFAR100 BID12 BID19: a variant of the CIFAR object recognition dataset with 100 classes BID5 ). The experiment is on 20 tasks and each task consists of 5 classes; each task consists of 2, 500 training and 500 testing samples. Where, each task introduces a new set of classes; for a total number of 20 tasks, each new task concerns examples from a disjoint subset of 5 classes. In the original setting of , a softmax layer is added to the output vector which only allows entries representing the 5 classes in the current task to output values larger than 0. In our setting, we allow the entries representing all the past occurring classes to output values larger than 0. We believe this is a more natural setup for lifelong learning. The DNN used for rotated and permuted MNIST is an MLP with 2 hidden layers and each with 400 units; whereas a ResNet18 is used for the incremental CIFAR100 experiment. We train 5 epochs with batch size 200 on rotated and permuted MNIST datasets and 10 epochs with batch size 100 on incremental CIFAR100. In all experiments we compare the following algorithms: I) A simple SGD predictor, II) EWC BID4, III) GEM and IV) OLSS (ours).In all the algorithms, we use a plain SGD optimizer. All algorithms were implemented based on the publicly available code from the original authors of the GEM paper BID8. The regularization and memory hyper-parameters in EWC and GEM were set as described in BID8. The space parameter for our OLSS algorithm was set to be equal to the number of samples in each task; the learning rate for each algorithm was determined through a grid search on {0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0}. Comparing across all the algorithms, we summarize the average test accuracy on the learned tasks in FIG1 (see Appendix Figure 2 for the change in the test accuracy at the first task, as more tasks are learned.) and the computational costs for each algorithm in TAB0. As observed from the figures, across the three benchmarks, OLSS and GEM achieve similar accuracy and significantly outperform both EWC and simple SGD training. Nevertheless, GEM demands much higher computational resources (see TAB0) as the algorithm requires a constraint validation step and a potential gradient projection step to correct for constraint violations across all previously learned tasks during training (see Section 3 in BID8). 
In detail, for GEM, the time complexity is proportional to the product of the number of samples kept in the memory buffer, the number of parameters in the model and the number of iterations required to converge. In contrast, OLSS requires an SVD (or QR factorization) to compute the leverage scores for each task, which can be achieved in a time complexity proportional to the product of the square of the number of features and the number of data samples, and is much lower than that of GEM. As observed in Appendix Figure 2, OLSS shows robustness to catastrophic forgetting of the first task, with positive backward transfer across all three datasets while learning the remaining sequence of tasks. In the case of rotated and permuted MNIST, OLSS is the most robust method. As presented in Appendix Figure 3, after training on the whole sequence of tasks, both GEM and OLSS are able to preserve the accuracy for most tasks on rotated and permuted MNIST. In contrast, it is hard for all algorithms to preserve the accuracy of the previously trained tasks on CIFAR100. As we noted earlier, EWC exhibits a saturation issue when the number of tasks increases. This may hold for most regularization methods for continual learning, as they constrain the model parameters successively, thereby limiting the model capacity. We presented a new approach to addressing the lifelong learning problem with deep neural networks. It is inspired by the randomization and compression techniques typically used in statistical analysis. We combined a simple importance sampling technique, leverage score sampling, with the frequent directions concept and developed an online effective forgetting or compression mechanism that enables lifelong learning across a sequence of tasks. Despite its simple structure, the results of the MNIST and CIFAR100 experiments show its effectiveness compared to recent state of the art. | A new method uses statistical leverage score information to measure the importance of the data samples in every task and adopts frequent directions approach to enable a life-long learning property. | 600 | scitldr
Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and canonical coordinate representations. The result is a network invariant to translation and equivariant to both rotation and scale. PTN is trained end-to-end and composed of three distinct stages: a polar origin predictor, the newly introduced polar transformer module and a classifier. PTN achieves state-of-the-art results on rotated MNIST and the newly introduced SIM2MNIST dataset, an MNIST variation obtained by adding clutter and perturbing digits with translation, rotation and scaling. The ideas of PTN are extensible to 3D, which we demonstrate through the Cylindrical Transformer Network. Whether at the global pattern or local feature level BID8, the quest for (in/equi)variant representations is as old as the field of computer vision and pattern recognition itself. The state of the art in "hand-crafted" approaches is typified by SIFT. These detectors/descriptors identify the intrinsic scale or rotation of a region BID19 BID1 and produce an equivariant descriptor, which is normalized for scale and/or rotation invariance. The burden of these methods is in the computation of the orbit (i.e., a sampling of the transformation space), which is necessary to achieve equivariance. This motivated steerable filtering, which guarantees that transformed filter responses can be interpolated from a finite number of filter responses. Steerability was proved for rotations of Gaussian derivatives BID6 and extended to scale and translations in the shiftable pyramid BID31. Use of the orbit and SVD to create a filter basis was proposed by BID26; in parallel, BID29 proved that for certain classes of transformations there exist canonical coordinates in which deformation of the input presents as translation of the output. Following this work, BID25 and BID10; Teo & BID33 proposed a methodology for computing the bases of equivariant spaces given the Lie generators of a transformation. Most recently, BID30 proposed the scattering transform, which offers representations invariant to translation, scaling, and rotations. The current consensus is that representations should be learned, not designed. Equivariance to translations by convolution and invariance to local deformations by pooling are now textbook (BID17, p. 335), but approaches to equivariance of more general deformations are still maturing. The main veins are: the Spatial Transformer Network (STN) BID13, which, similarly to SIFT, learns a canonical pose and produces an invariant representation through warping; work which constrains the structure of convolutional filters BID36; and work which uses the filter orbit BID3 to enforce an equivariance to a specific transformation group. In this paper, we propose the Polar Transformer Network (PTN), which combines the ideas of STN and canonical coordinate representations to achieve equivariance to translations, rotations, and dilations. The three-stage network learns to identify the object center and then transforms the input into log-polar coordinates. In this coordinate system, planar convolutions correspond to group-convolutions in rotation and scale.
PTN produces a representation equivariant to rotations and dilations without http://github.com/daniilidis-group//polar-transformer-networks Figure 1: In the log-polar representation, rotations around the origin become vertical shifts, and dilations around the origin become horizontal shifts. The distance between the yellow and green lines is proportional to the rotation angle/scale factor. Top rows: sequence of rotations, and the corresponding polar images. Bottom rows: sequence of dilations, and the corresponding polar images.the challenging parameter regression of STN. We enlarge the notion of equivariance in CNNs beyond Harmonic Networks BID36 and Group Convolutions BID3 by capturing both rotations and dilations of arbitrary precision. Similar to STN; however, PTN accommodates only global deformations. We present state-of-the-art performance on rotated MNIST and SIM2MNIST, which we introduce. To summarize our contributions:• We develop a CNN architecture capable of learning an image representation invariant to translation and equivariant to rotation and dilation.• We propose the polar transformer module, which performs a differentiable log-polar transform, amenable to backpropagation training. The transform origin is a latent variable.• We show how the polar transform origin can be learned effectively as the centroid of a single channel heatmap predicted by a fully convolutional network. One of the first equivariant feature extraction schemes was proposed by BID25 who suggested the discrete sampling of 2D-rotations of a complex angle modulated filter. About the same time, the image and optical processing community discovered the Mellin transform as a modification of the Fourier transform BID39 BID0. The Fourier-Mellin transform is equivariant to rotation and scale while its modulus is invariant. During the 80's and 90's invariances of integral transforms were developed through methods based in the Lie generators of the respective transforms starting from one-parameter transforms BID5 and generalizing to Abelian subgroups of the affine group BID29.Closely related to the (in/equi)variance work is work in steerability, the interpolation of responses to any group action using the response of a finite filter basis. An exact steerability framework began in BID6, where rotational steerability for Gaussian derivatives was explicitly computed. It was extended to the shiftable pyramid BID31, which handle rotation and scale. A method of approximating steerability by learning a lower dimensional representation of the image deformation from the transformation orbit and the SVD was proposed by BID26.for the largest Abelian subgroup and incrementally steering for the remaining subgroups. Cohen & Welling (2016a); recently combined steerability and learnable filters. The most recent "hand-crafted" approach to equivariant representations is the scattering transform BID30 which composes rotated and dilated wavelets. Similar to SIFT this approach relies on the equivariance of anchor points (e.g. the maxima of filtered responses in (translation) space). Translation invariance is obtained through the modulus operation which is computed after each convolution. The final scattering coefficient is invariant to translations and equivariant to local rotations and scalings. BID15 achieve transformation invariance by pooling feature maps computed over the input orbit, which scales poorly as it requires forward and backward passes for each orbit element. 
Within the context of CNNs, methods of enforcing equivariance fall to two main veins. In the first, equivariance is obtained by constraining filter structure similarly to Lie generator based approaches BID29 BID10. Harmonic Networks BID36 use filters derived from the complex harmonics achieving both rotational and translational equivariance. The second requires the use of a filter orbit which is itself equivariant to obtain group equivariance. BID3 convolve with the orbit of a learned filter and prove the equivariance of group-convolutions and preservation of rotational equivariance in the presence of rectification and pooling. BID4 process elements of the image orbit individually and use the set of outputs for classification. BID7 produce maps of finite-multiparameter groups, BID38 and BID21 use a rotational filter orbit to produce oriented feature maps and rotationally invariant features, and BID18 propose a transformation layer which acts as a group-convolution by first permuting then transforming by a linear filter. Our approach, PTN, is akin to the second vein. We achieve global rotational equivariance and expand the notion of CNN equivariance to include scaling. PTN employs log-polar coordinates (canonical coordinates in BID29) to achieve rotation-dilation group-convolution through translational convolution subject to the assumption of an image center estimated similarly to the STN. Most related to our method is BID11, which achieves equivariance by warping the inputs to a fixed grid, with no learned parameters. When learning features from 3D objects, invariance to transformations is usually achieved through augmenting the training data with transformed versions of the inputs BID37, or pooling over transformed versions during training and/or test BID22 BID27. BID28 show that a multi-task approach, i.e. prediction of both the orientation and class, improves classification performance. In our extension to 3D object classification, we explicitly learn representations equivariant to rotations around a family of parallel axes by transforming the input to cylindrical coordinates about a predicted axis. This section is divided into two parts, the first offers a review of equivariance and groupconvolutions. The second offers an explicit example of the equivariance of group-convolutions through the 2D similarity transformations group, SIM, comprised of translations, dilations and rotations. Reparameterization of SIM to canonical coordinates allows for the application of the SIM group-convolution using translational convolution. Equivariant representations are highly sought after as they encode both class and deformation information in a predictable way. Let G be a transformation group and L g I be the group action applied to an image I. A mapping Φ: E → F is said to be equivariant to the group action DISPLAYFORM0 where L g and L g correspond to application of g to E and F respectively and satisfy DISPLAYFORM1 Invariance is the special case of equivariance where L g is the identity. In the context of image classification and CNNs, g ∈ G can be thought of as an image deformation and Φ a mapping from the image to a feature map. The inherent translational equivariance of CNNs is independent of the convolutional kernel and evident in the corresponding translation of the output in response to translation of the input. Equivariance to other types of deformations can be achieved through application of the group-convolution, a generalization of translational convolution. 
Letting f (g) and φ(g) be real valued functions on G DISPLAYFORM2 A slight modification to the definition is necessary in the first CNN layer since the group is acting on the image. The group-convolution reduces to translational convolution when G is translation in R n with addition as the group operator, DISPLAYFORM3 Group-convolution requires integrability over a group and identification of the appropriate measure dg. It can be proved that given the measure dg, group-convolution is always group equivariant: DISPLAYFORM4 This is depicted in response of an equivariant representation to input deformation (Figure 2 (left)). A similarity transformation, ρ ∈ SIM, acts on a point in x ∈ R 2 by DISPLAYFORM0 where SO is the rotation group. To take advantage of the standard planar convolution in classical CNNs we decompose a ρ ∈ SIM into a translation, t in R 2 and a dilated-rotation r in SO×R +.Equivariance to SIM FORMULA2 is achieved by learning the center of the dilated rotation, shifting the original image accordingly then transforming the image to canonical coordinates. In this reparameterization the standard translational convolution is equivalent to the dilated-rotation group-convolution. The origin predictor is an application of STN to global translation prediction BID13, the centroid of the output is taken as the origin of the input. Transformation of the image L t I = I(t − t 0) (canonization in Soatto FORMULA0) reduces the SIM deformation to a dilated-rotation if t o is the true translation. After centering, we perform SO × R + convolutions on the new image I o = I(x − t o): DISPLAYFORM1 and the feature maps f in subsequent layers DISPLAYFORM2 where r, s ∈ SO × R +. We compute this convolution through use of canonical coordinates for Abelian Lie-groups BID29. The centered image I o (x, y) 1 is transformed to logpolar coordinates, I(e ξ cos(θ), e ξ sin(θ)) hereafter written λ(ξ, θ) with (ξ, θ) ∈ SO × R + for Figure 2: Left: Group-convolutions in SO. The images in the left most column differ by 90• rotation, the filters are shown in the top row. Application of the rotational group-convolution with an arbitrary filter is shown to produce an equivariant representation. The inner-product each of filter orbit (rotated from 0 − 360 •) and the image is plotted in blue for the top image and red for the bottom image. Observe how the filter response is shifted by 90•. Right: Group-convolutions in SO × R +. Images in the left most column differ by a rotation of π/4 and scaling of 1.2. Careful consideration of the ing heatmaps (shown in canonical coordinates) reveals a shift corresponding to the deformation of the input image.notational convenience. The shift of the dilated-rotation equivariant representation in response to input deformation is shown in Figure 2 (right) using canonical coordinates. In canonical coordinates s −1 r = ξ r − ξ, θ r − θ and the SO × R + group-convolution 2 can be expressed and efficiently implemented as a planar convolution DISPLAYFORM3 To summarize, we construct a network of translational convolutions, take the centroid of the last layer, shift the original image to accordingly, convert to log-polar coordinates, and apply a second network 3 of translational convolutions. The is a feature map equivariant to dilated-rotations around the origin. PTN is comprised of two main components connected by the polar transformer module. The first part is the polar origin predictor and the second is the classifier (a conventional fully convolutional network). 
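The canonical-coordinates claim above can be checked numerically. The snippet below (an illustration of ours, not the authors' code) resamples an image into log-polar coordinates with nearest-neighbor lookup and verifies that a 90-degree rotation about the origin appears as a cyclic shift along the angular axis, which is exactly the setting in which a planar convolution with wrap-around padding acts as a rotation group-convolution.

```python
import numpy as np

def log_polar(img, origin, out_h=96, out_w=64):
    """Nearest-neighbor log-polar resampling of a square image about `origin`.
    Rows index the angle theta; columns index the log-radius xi."""
    H, W = img.shape
    r_max = 0.5 * np.hypot(H, W)
    theta = 2 * np.pi * np.arange(out_h) / out_h
    radius = r_max ** (np.arange(out_w) / out_w)            # exponentially spaced radii
    ys = np.clip(np.round(origin[0] + radius[None, :] * np.sin(theta[:, None])), 0, H - 1)
    xs = np.clip(np.round(origin[1] + radius[None, :] * np.cos(theta[:, None])), 0, W - 1)
    return img[ys.astype(int), xs.astype(int)]

# A rotation about the origin becomes a cyclic shift of the rows (the angular axis);
# a dilation about the origin would likewise become a shift of the columns.
img = np.zeros((101, 101))
img[30:40, 60:80] = 1.0
lp = log_polar(img, origin=(50, 50))
lp_rot = log_polar(np.rot90(img), origin=(50, 50))          # 90-degree rotation about the center
print(np.abs(np.roll(lp, -96 // 4, axis=0) - lp_rot).mean())  # approximately 0
```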
The building block of the network is a 3 × 3 × K convolutional layer followed by batch normalization, an ReLU and occasional subsampling through strided convolution. We will refer to this building block simply as block. Figure 3 shows the architecture. The polar origin predictor operates on the original image and comprises a sequence of blocks followed by a 1 × 1 convolution. The output is a single channel feature map, the centroid of which is taken as the origin of the polar transform. There are some difficulties in training a neural network to predict coordinates in images. Some approaches BID35 attempt to use fully connected layers to directly regress the coordinates with limited success. A better option is to predict heatmaps BID34 BID24, and take their argmax. However, this can be problematic since backpropogation gradients are zero in all but one point, which impedes learning.1 we abuse the notation here and momentarily we use x as the x-coordinate instead of x ∈ R 2. 2 abuse of the term, SO × R + is not a group because the dilation ξ is not compact. 3 the network employs rectifier and pooling which have been shown to preserve equivariance BID3 Figure 3: Network architecture. The input image passes through a fully convolutional network, the polar origin predictor, which outputs a heatmap. The centroid of the heatmap (two coordinates), together with the input image, goes into the polar transformer module, which performs a polar transform with origin at the input coordinates. The obtained polar representation is invariant with respect to the original object location; and rotations and dilations are now shifts, which are handled equivariantly by a conventional classifier CNN.The usual approach to heatmap prediction is evaluation of a loss against some ground truth. In this approach the argmax gradient problem is circumvented by supervision. In PTN the the gradient of the output coordinates must be taken with respect to the heatmap since the polar origin is unknown and must be learned. Use of argmax is avoided by using the centroid of the heatmap as the polar origin. The gradient of the centroid with respect to the heatmap is constant and nonzero for all points, making learning possible. The polar transformer module takes the origin prediction and image as inputs and outputs the logpolar representation of the input. The module uses the same differentiable image sampling technique as STN BID13, which allows output coordinates V i to be expressed in terms of the input U and the source sample point coordinates (x s i, y s i). The log-polar transform in terms of the source sample points and target regular grid (x t i, y t i) is: DISPLAYFORM0 DISPLAYFORM1 where (x 0, y 0) is the origin, W, H are the output width and height, and r is the maximum distance from the origin, set to 0.5 √ H 2 + W 2 in our experiments. To maintain feature map resolution, most CNN implementations use zero-padding. This is not ideal for the polar representation, as it is periodic about the angular axis. A rotation of the input in a vertical shift of the output, wrapping at the boundary; hence, identification of the top and bottom most rows is most appropriate. This is achieved with wrap-around padding on the vertical dimension. The top most row of the feature map is padded using the bottom rows and vice versa. Zero-padding is used in the horizontal dimension. TAB5 shows a performance evaluation. 
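Below is a sketch of the polar transformer module as described in this section, using PyTorch's differentiable grid sampling. The heatmap-centroid origin, the log-polar target grid with maximum radius r = 0.5 sqrt(H^2 + W^2), and the wrap-around padding along the angular axis follow the text; the helper names, tensor shapes, and default sizes are assumptions, and the origin-predictor layers are omitted.

```python
import math
import torch
import torch.nn.functional as F

def heatmap_centroid(heatmap):
    """Centroid of an (N, 1, H, W) non-negative heatmap, in normalized [-1, 1]
    (x, y) coordinates. Unlike an argmax, the centroid has useful gradients everywhere."""
    N, _, H, W = heatmap.shape
    h = heatmap.view(N, -1)
    h = h / (h.sum(dim=1, keepdim=True) + 1e-8)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=1).to(heatmap)  # (H*W, 2)
    return h @ coords                                                          # (N, 2)

def polar_transformer(img, origin, out_h=64, out_w=64):
    """Differentiable log-polar warp of img (N, C, H, W) about `origin` (N, 2),
    given in the normalized [-1, 1] coordinates expected by grid_sample."""
    N, _, H, W = img.shape
    r_max = 0.5 * math.sqrt(H ** 2 + W ** 2)                 # max radius in pixels, as in the text
    radius_px = r_max ** (torch.arange(out_w, dtype=torch.float32) / out_w)
    theta = 2 * math.pi * torch.arange(out_h, dtype=torch.float32) / out_h
    dx = radius_px[None, :] * torch.cos(theta)[:, None] * (2.0 / W)   # pixel -> normalized offsets
    dy = radius_px[None, :] * torch.sin(theta)[:, None] * (2.0 / H)
    xs = origin[:, 0, None, None] + dx[None]
    ys = origin[:, 1, None, None] + dy[None]
    grid = torch.stack([xs, ys], dim=-1)                     # (N, out_h, out_w, 2)
    return F.grid_sample(img, grid, align_corners=True, padding_mode="zeros")

def wrap_pad(x, pad):
    """Wrap-around padding along the angular (height) axis, zero padding along radius."""
    x = torch.cat([x[:, :, -pad:], x, x[:, :, :pad]], dim=2)
    return F.pad(x, (pad, pad, 0, 0))
```

In a full model, a small fully convolutional origin predictor would produce the single-channel heatmap fed to `heatmap_centroid`, and the classifier CNN would apply `wrap_pad` before each convolution so that rotations wrap correctly across the angular boundary.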
To improve robustness of our method, we augment the polar origin during training time by adding a random shift to the regressed polar origin coordinates. Note that this comes for little computational cost compared to conventional augmentation methods such as rotating the input image. TAB5 quantifies the performance gains of this kind of augmentation. We briefly define the architectures in this section, see A for details. CCNN is a conventional fully convolutional network; PCNN is the same, but applied to polar images with central origin. STN is our implementation of the spatial transformer networks BID13. PTN is our polar transformer networks, and PTN-CNN is a combination of PTN and CCNN. The suffixes S and B indicate small and big networks, according to the number of parameters. The suffixes + and ++ indicate training and training+test rotation augmentation. We perform rotation augmentation for polar-based methods. In theory, the effect of input rotation is just a shift in the corresponding polar image, which should not affect the classifier CNN. In practice, interpolation and angle discretization effects in slightly different polar images for rotated inputs, so even the polar-based methods benefit from this kind of augmentation. TAB1 shows the . We divide the analysis in two parts; on the left, we show approaches with smaller networks and no rotation augmentation, on the right there are no restrictions. Between the restricted approaches, the Harmonic Network BID36 outperforms the PTN by a small margin, but with almost 4x more training time, because the convolutions on complex variables are more costly. Also worth mentioning is the poor performance of the STN with no augmentation, which shows that learning the transformation parameters is much harder than learning the polar origin coordinates. Between the unrestricted approaches, most variants of PTN-B outperform the current state of the art, with significant improvements when combined with CCNN and/or test time augmentation. Finally, we note that the PCNN achieves a relatively high accuracy in this dataset because the digits are mostly centered, so using the polar transform origin as the image center is reasonable. Our method, however, outperforms it by a high margin, showing that even in this case, it is possible to find an origin away from the image center that in a more distinctive representation. FORMULA0 6 Test time performance is 8x slower when using test time augmentation We also perform experiments in other MNIST variants. MNIST R, RTS are replicated from BID13. We introduce SIM2MNIST, with a more challenging set of transformations from SIM. See B for more details about the datasets. TAB2 shows the . We can see that the PTN performance mostly matches the STN on both MNIST R and RTS. The deformations on these datasets are mild and data is plenty, so the performance may be saturated. On SIM2MNIST, however, the deformations are more challenging and the training set 5x smaller. The PCNN performance is significantly lower, which reiterates the importance of predicting the best polar origin. The HNet outperforms the other methods (except the PTN), thanks to its translation and rotation equivariance properties. Our method is more efficient both in number of parameters and training time, and is also equivariant to dilations, achieving the best performance by a large margin. 
BID13 0 DISPLAYFORM0.28 (0.05) 44k 31.42 TI-Pooling BID15 0.8 DISPLAYFORM1 No augmentation is used with SIM2MNIST, despite the + suffixes 2 Our modified version, with two extra layers with subsampling to account for larger input We visualize network activations to confirm our claims about invariance to translation and equivariance to rotations and dilations. Figure 4 (left) shows some of the predicted polar origins and the of the polar transform. We can see that the network learns to reject clutter and to find a suitable origin for the polar transform, and that the representation after the polar transformer module does present the properties claimed. We proceed to visualize if the properties are preserved in deeper layers. FIG1 (right) shows the activations of selected channels from the last convolutional layer, for different rotations, dilations, and translations of the input. The reader can verify that the equivariance to rotations and dilations, and the invariance to translations are indeed preserved during the sequence of convolutional layers. We extend our model to perform 3D object classification from voxel occupancy grids. We assume that the inputs are transformed by random rotations around an axis from a family of parallel axes. Then, a rotation around that axis corresponds to a translation in cylindrical coordinates. In order to achieve equivariance to rotations, we predict an axis and use it as the origin to transform to cylindrical coordinates. If the axis is parallel to one of the input grid axes, the cylindrical transform amounts to channel-wise polar transforms, where the origin is the same for all channels and each channel is a 2D slice of the 3D voxel grid. In this setting, we can just apply the polar transformer layer to each slice. We use a technique similar to the anisotropic probing of BID27 to predict the axis. Let z denote the input grid axis parallel to the rotation axis. We treat the dimension indexed by z as channels, and run regular 2D convolutional layers, reducing the number of channels on each layer, eventually collapsing to a single 2D heatmap. The heatmap centroid gives one point of the axis, and the direction is parallel to z. In other words, the centroid is the origin of all channel-wise polar transforms. We then proceed with a regular 3D CNN classifier, acting on the cylindrical representation. The 3D convolutions are equivariant to translations; since they act on cylindrical coordinates, the learned representation is equivariant to input rotations around axes parallel to z. We run experiments on ModelNet40 BID37, which contains objects rotated around the gravity direction (z). FIG2 shows examples of input voxel grids and their cylindrical coordinates representation, while table 3 shows the classification performance. To the best of our knowledge, our method outperforms all published voxel-based methods, even with no test time augmentation. However, the multi-view based methods generally outperform the voxel-based. BID27.Note that we could also achieve equivariance to scale by using log-cylindrical or log-spherical coordinates, but none of these change of coordinates would in equivariance to arbitrary 3D rotations. Cylindrical Transformer (Ours) 86.5 89.9 3D ShapeNets BID37 77.3 -VoxNet BID22 83 -MO-SubvolumeSup BID27 86.0 89.2 MO-Aniprobing BID27 85.6 89.9 We have proposed a novel network whose output is invariant to translations and equivariant to the group of dilations/rotations. 
We have combined the idea of learning the translation (similar to the spatial transformer) but providing equivariance for the scaling and rotation, avoiding, thus, fully connected layers required for the pose regression in the spatial transformer. Equivariance with respect to dilated rotations is achieved by convolution in this group. Such a convolution would require the production of multiple group copies, however, we avoid this by transforming into canonical coordinates. We improve the state of the art performance on rotated MNIST by a large margin, and outperform all other tested methods on a new dataset we call SIM2MNIST. We expect our approach to be applicable to other problems, where the presence of different orientations and scales hinder the performance of conventional CNNs. We implement the following architectures for comparison,• Conventional CNN (CCNN), a fully convolutional network, composed of a sequence of convolutional layers and some rounds of subsampling.• Polar CNN (PCNN), same architecture as CCNN, operating on polar images. The logpolar transform is pre-computed at the image center before training, as in BID11. The fundamental difference between our method and this is that we learn the polar origin implicitly, instead of fixing it.• Spatial Transformer Network (STN), our implementation of BID13, replacing the localization network by four blocks of 20 filters and stride 2, followed by a 20 unit fully connected layer, which we found to perform better. The transformation regressed is in SIM, and a CCNN comes after the transform.• Polar Transformer Network (PTN), our proposed method. The polar origin predictor comprises three blocks of 20 filters each, with stride 2 on the first block (or the first two blocks, when input is 96 × 96). The classification network is the CCNN.• PTN-CNN, we classify based on the sum of the per class scores of instances of PTN and CCNN trained independently. The following suffixes qualify the architectures described above:• S, "small" network, with seven blocks of 20 filters and one round of subsampling (equivalent to the Z2CNN in Cohen & Welling (2016b)).• B, "big" network, with 8 blocks with the following number of filters: 16, 16, 32, 32, 32, 64, 64, 64. Subsampling by strided convolution is used whenever the number of filters increase. We add up to two 2 extra blocks of 16 filters with stride 2 at the beginning to handle larger input resolutions (one for 42 × 42 and two for 96 × 96).• +, training time rotation augmentation by continuous angles.• ++, training and test time rotation augmentation. We input 8 rotated versions the the query image and classify using the sum of the per class scores. The axis prediction part of the cylindrical transformer network is composed of four 2D blocks, with 5 × 5 kernels and 32, 16, 8, and 4 channels, no subsampling. The classifier is composed of eight 3D convolutional blocks, with 3 × 3 × 3 kernels, the following number of filters: 32, 32, 32, 64, 64, 64, 128, 128, and subsampling whenever the number of filters increase. Total number of params is approximately 1M. • Rotated MNIST The rotated MNIST dataset BID16 is composed of 28 × 28, 360• rotated images of handwritten digits. The training, validation and test sets are of sizes 10k, 2k, and 50k, respectively.• MNIST R, we replicate it from BID13. It has 60k training and 10k testing samples, where the digits of the original MNIST are rotated between [−90 DISPLAYFORM0 It is also know as half-rotated MNIST BID15 .• MNIST RTS, we replicate it from BID13 . 
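Putting the appendix description into code, a rough sketch of the cylindrical transformer's axis predictor and warp might look as follows. It reuses the `heatmap_centroid` and `polar_transformer` helpers sketched earlier; the final 1-channel convolution that collapses the probing maps to a heatmap follows the earlier description of the anisotropic probing, and the remaining details (voxel grid depth, defaults) are placeholders rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CylindricalTransformer(nn.Module):
    """Predict a rotation axis parallel to z and warp a voxel grid (N, 1, D, H, W) to
    cylindrical coordinates via channel-wise polar transforms about a shared origin."""
    def __init__(self, depth=32):
        super().__init__()
        # Anisotropic probing: treat the D z-slices as channels; 5x5 kernels with
        # 32, 16, 8, 4 channels as in the appendix, then collapse to a single heatmap.
        self.axis_predictor = nn.Sequential(
            nn.Conv2d(depth, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 8, 5, padding=2), nn.ReLU(),
            nn.Conv2d(8, 4, 5, padding=2), nn.ReLU(),
            nn.Conv2d(4, 1, 5, padding=2),
        )

    def forward(self, vox):
        N, _, D, H, W = vox.shape
        slices = vox.squeeze(1)                              # (N, D, H, W): z indexed as channels
        heatmap = self.axis_predictor(slices).relu()         # non-negative for the centroid
        origin = heatmap_centroid(heatmap)                   # (N, 2): one origin per sample
        cyl = polar_transformer(slices, origin, out_h=H, out_w=W)   # same origin for every slice
        return cyl.unsqueeze(1)                              # ready for a conventional 3D CNN
```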
It has 60k training and 10k testing samples, where the digits of the original MNIST are rotated between [−45 DISPLAYFORM1 •], scaled between 0.7 and 1.2, and shifted within a 42 × 42 black canvas.• SIM2MNIST, we introduce a more challenging dataset, based on MNIST, perturbed by random transformations from SIM. The images are 96 × 96, with 360 • rotations; the scale factors range from 1 to 2.4, and the digits can appear anywhere in the image. The training, validation and test set have size 10k, 5k, and 50k, respectively. In order to demonstrate the efficacy of PTN on real-world RGB images, we run experiments on the Street View House Numbers (SVHN) dataset BID23, and a rotated version that we introduce (ROTSVHN). The dataset contains cropped images of single digits, as well as the slightly larger images from where the digits are cropped. Using the latter, we can extract the rotated digits without introducing artifacts. FIG3 shows some examples from the ROTSVHN.We use a 32 layer Residual Network BID9 as a baseline (ResNet32). The PTN-ResNet32 has 8 residual convolutional layers as the origin predictor, followed by a ResNet32.In contrast with handwritten digits, the 6s and 9s in house numbers are usually indistinguishable. To remove this effect from our analysis, we also run experiments removing those classes from the datasets (which is denoted by appending a minus to the dataset name). TAB4 shows the . The reader will note that rotations cause a significant performance loss on the conventional ResNet; the error increases from 2.09% to 5.39%, even when removing 6s and 9s from the dataset. With PTN, on the other hand, the error goes from 2.85% to 3.96%, which shows our method is more robust to the perturbations, although the performance on the unperturbed datasets is slightly worse. We expect the PTN to be even more advantageous when large scale variations are also present. We quantify the performance boost obtained with wrap around padding, polar origin augmentation, and training time rotation augmentation. Results are based on the PTN-B variant trained on Rotated MNIST. We remove one operation at a time and verify that the performance consistently drops, which indicates that all operations are indeed helpful. TAB5 shows the . | We learn feature maps invariant to translation, and equivariant to rotation and scale. | 601 | scitldr |
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulate the model-agnostic meta-learning algorithm (MAML) of BID12 as a method for probabilistic inference in a hierarchical Bayesian model. In contrast to prior methods for meta-learning via hierarchical Bayes, MAML is naturally applicable to complex function approximators through its use of a scalable gradient descent procedure for posterior inference. Furthermore, the identification of MAML as hierarchical Bayes provides a way to understand the algorithm's operation as a meta-learning procedure, as well as an opportunity to make use of computational strategies for efficient inference. We use this opportunity to propose an improvement to the MAML algorithm that makes use of techniques from approximate inference and curvature estimation. A remarkable aspect of human intelligence is the ability to quickly solve a novel problem, and to be able to do so even in the face of limited experience in a novel domain. Such fast adaptation is made possible by leveraging prior learning experience in order to improve the efficiency of later learning. This capacity for meta-learning also has the potential to enable an artificially intelligent agent to learn more efficiently in situations with little available data or limited computational resources BID45 BID4 BID37. In machine learning, meta-learning is formulated as the extraction of domain-general information that can act as an inductive bias to improve learning efficiency in novel tasks (BID52). This inductive bias has been implemented in various ways: as learned hyperparameters in a hierarchical Bayesian model that regularize task-specific parameters BID18, as a learned metric space in which to group neighbors BID7, as a trained recurrent neural network that allows encoding and retrieval of episodic information BID43, or as an optimization algorithm with learned parameters BID45 BID3. The model-agnostic meta-learning (MAML) algorithm of BID12 is an instance of a learned optimization procedure that directly optimizes the standard gradient descent rule. The algorithm estimates an initial parameter set to be shared among the task-specific models; the intuition is that gradient descent from the learned initialization provides a favorable inductive bias for fast adaptation. However, this inductive bias has been evaluated only empirically in prior work BID12. In this work, we present a novel derivation of, and a novel extension to, MAML, illustrating that this algorithm can be understood as inference for the parameters of a prior distribution in a hierarchical Bayesian model. The learned prior allows for quick adaptation to unseen tasks on the basis of an implicit predictive density over task-specific parameters. The reinterpretation as hierarchical Bayes gives a principled statistical motivation for MAML as a meta-learning algorithm, and sheds light on the reasons for its favorable performance even among methods with significantly more parameters. More importantly, by casting gradient-based meta-learning within a Bayesian framework, we are able to improve MAML by taking insights from Bayesian posterior estimation as novel augmentations to the gradient-based meta-learning procedure.
We experimentally demonstrate that this enables better performance on a few-shot learning benchmark. The goal of a meta-learner is to extract task-general knowledge through the experience of solving a number of related tasks. By using this learned prior knowledge, the learner has the potential to quickly adapt to novel tasks even in the face of limited data or limited computation time. Formally, we consider a dataset D that defines a distribution over a family of tasks T. These tasks share some common structure such that learning to solve a single task has the potential to aid in solving another. Each task T defines a distribution over data points x, which we assume in this work to consist of inputs and either regression targets or classification labels y in a supervised learning problem (although this assumption can be relaxed to include reinforcement learning problems; e.g., see BID12 . The objective of the meta-learner is to be able to minimize a task-specific performance metric associated with any given unseen task from the dataset given even only a small amount of data from the task; i.e., to be capable of fast adaptation to a novel task. In the following subsections, we discuss two ways of formulating a solution to the meta-learning problem: gradient-based hyperparameter optimization and probabilistic inference in a hierarchical Bayesian model. These approaches were developed orthogonally, but, in Section 3.1, we draw a novel connection between the two. A parametric meta-learner aims to find some shared parameters θ that make it easier to find the right task-specific parameters φ when faced with a novel task. A variety of meta-learners that employ gradient methods for task-specific fast adaptation have been proposed (e.g., BID2 BID26 BID57 . MAML BID12 is distinct in that it provides a gradient-based meta-learning procedure that employs a single additional parameter (the meta-learning rate) and operates on the same parameter space for both meta-learning and fast adaptation. These are necessary features for the equivalence we show in Section 3.1.To address the meta-learning problem, MAML estimates the parameters θ of a set of models so that when one or a few batch gradient descent steps are taken from the initialization at θ given a small sample of task data x j 1,..., x j N ∼ p T j (x) each model has good generalization performance on another sample x j N +1,..., x j N +M ∼ p T j (x) from the same task. The MAML objective in a maximum likelihood setting is DISPLAYFORM0 where we use φ j to denote the updated parameters after taking a single batch gradient descent step from the initialization at θ with step size α on the negative log-likelihood associated with the task T j. Note that since φ j is an iterate of a gradient descent procedure that starts from θ, each φ j is of the same dimensionality as θ. We refer to the inner gradient descent procedure that computes φ j as fast adaptation. The computational graph of MAML is given in Figure 1 (left). An alternative way to formulate meta-learning is as a problem of probabilistic inference in the hierarchical model depicted in Figure 1 (right). In particular, in the case of meta-learning, each task-specific parameter φ j is distinct from but should influence the estimation of the parameters {φ j | j = j} from other tasks. We can capture this intuition by introducing a meta-level parameter θ on which each task-specific parameter is statistically dependent. 
With this formulation, the mutual dependence of the task-specific parameters φ j is realized only through their individual dependence DISPLAYFORM0 The computational graph of the MAML BID12 algorithm covered in Section 2.1.Straight arrows denote deterministic computations and crooked arrows denote sampling operations. (Right) The probabilistic graphical model for which MAML provides an inference procedure as described in Section 3.1. In each figure, plates denote repeated computations (left) or factorization (right) across independent and identically distributed samples.on the meta-level parameters θ. As such, estimating θ provides a way to constrain the estimation of each of the φ j.Given some data in a multi-task setting, we may estimate θ by integrating out the task-specific parameters to form the marginal likelihood of the data. Formally, grouping all of the data from each of the tasks as X and again denoting by x j 1,..., x j N a sample from task T j, the marginal likelihood of the observed data is given by DISPLAYFORM1 Maximizing FORMULA2 as a function of θ gives a point estimate for θ, an instance of a method known as empirical Bayes BID5 BID15 due to its use of the data to estimate the parameters of the prior distribution. Hierarchical Bayesian models have a long history of use in both transfer learning and domain adaptation (e.g., BID25 BID58 BID14 BID8 BID56 . However, the formulation of meta-learning as hierarchical Bayes does not automatically provide an inference procedure, and furthermore, there is no guarantee that inference is tractable for expressive models with many parameters such as deep neural networks. In this section, we connect the two independent approaches of Section 2.1 and Section 2.2 by showing that MAML can be understood as empirical Bayes in a hierarchical probabilistic model. Furthermore, we build on this understanding by showing that a choice of update rule for the taskspecific parameters φ j (i.e., a choice of inner-loop optimizer) corresponds to a choice of prior over task-specific parameters, p(φ j | θ). In general, when performing empirical Bayes, the marginalization over task-specific parameters φ j in FORMULA2 is not tractable to compute exactly. To avoid this issue, we can consider an approximation that makes use of a point estimateφ j instead of performing the integration over φ in. Usingφ j as an estimator for each φ j, we may write the negative logarithm of the marginal likelihood as FORMULA3 recovers the unscaled form of the one-step MAML objective in. This tells us that the MAML objective is equivalent to a maximization with respect to the meta-level parameters θ of the marginal likelihood p(X | θ), where a point estimate for each task-specific parameter φ j is computed via one or a few steps of gradient descent. By taking only a few steps from the initialization at θ, the point estimateφ j trades off DISPLAYFORM0 DISPLAYFORM1 DISPLAYFORM2 Algorithm 2: Model-agnostic meta-learning as hierarchical Bayesian inference. The choices of the subroutine ML-· · · that we consider are defined in Subroutine 3 and Subroutine 4. DISPLAYFORM3 Subroutine 3: Subroutine for computing a point estimateφ using truncated gradient descent to approximate the marginal negative log likelihood (NLL).minimizing the fast adaptation objective − log p(x j 1, . . ., x j N | θ) with staying close in value to the parameter initialization θ. We can formalize this trade-off by considering the linear regression case. 
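Before turning to that linear-regression analysis, Algorithm 2 together with Subroutine 3 can be summarized in a short PyTorch sketch of our own (illustrative only): truncated gradient descent from theta computes the point estimate phi-hat on one half of each task's data, and the outer update follows the gradient of the marginal NLL approximated on the other half. Here `theta` is assumed to be a list of parameter tensors with `requires_grad=True`, and `nll_fn` is a placeholder that evaluates the negative log-likelihood of a data batch under a given parameter list.

```python
import torch

def fast_adapt(theta, nll_fn, support, num_steps=1, alpha=0.4):
    """Subroutine 3 (sketch): truncated gradient descent from theta gives phi-hat.
    create_graph=True keeps the inner steps differentiable w.r.t. theta."""
    phi = [p for p in theta]
    for _ in range(num_steps):
        loss = nll_fn(phi, support)
        grads = torch.autograd.grad(loss, phi, create_graph=True)
        phi = [p - alpha * g for p, g in zip(phi, grads)]
    return phi

def maml_outer_step(theta, nll_fn, task_batch, meta_lr=1e-3):
    """Algorithm 2 (sketch): sum the marginal NLLs approximated at the point estimates,
    then take one gradient step on the shared initialization theta."""
    meta_loss = 0.0
    for support, query in task_batch:               # x_1..x_N and x_{N+1}..x_{N+M}
        phi_hat = fast_adapt(theta, nll_fn, support)
        meta_loss = meta_loss + nll_fn(phi_hat, query)
    grads = torch.autograd.grad(meta_loss, theta)
    with torch.no_grad():
        for p, g in zip(theta, grads):
            p -= meta_lr * g
    return meta_loss.item()
```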
Recall that the maximum a posteriori (MAP) estimate of φ j corresponds to the global mode of the posterior DISPLAYFORM4 In the case of a linear model, early stopping of an iterative gradient descent procedure to estimate φ j is exactly equivalent to MAP estimation of φ j under the assumption of a prior that depends on the number of descent steps as well as the direction in which each step is taken. In particular, write the input examples as X and the vector of regression targets as y, omit the task index from φ, and consider the gradient descent update DISPLAYFORM5 for iteration index k and learning rate α ∈ R +. shows that, starting from φ = θ, φ (k) in solves the regularized linear least squares problem DISPLAYFORM6 with Q-norm defined by z Q = z T Q −1 z for a symmetric positive definite matrix Q that depends on the step size α and iteration index k as well as on the covariance structure of X. We describe the exact form of the dependence in Section 3.2. The minimization in can be expressed as a posterior maximization problem given a conditional Gaussian likelihood over y and a Gaussian prior over φ. The posterior takes the form DISPLAYFORM7 Since φ (k) in maximizes, we may conclude that k iterations of gradient descent in a linear regression model with squared error exactly computes the MAP estimate of φ, given a Gaussian-noised observation model and a Gaussian prior over φ with parameters µ 0 = θ and Σ 0 = Q. Therefore, in the case of linear regression with squared error, MAML is exactly empirical Bayes using the MAP estimate as the point estimate of φ. In the nonlinear case, MAML is again equivalent to an empirical Bayes procedure to maximize the marginal likelihood that uses a point estimate for φ computed by one or a few steps of gradient descent. However, this point estimate is not necessarily the global mode of a posterior. We can instead understand the point estimate given by truncated gradient descent as the value of the mode of an implicit posterior over φ ing from an empirical loss interpreted as a negative log-likelihood, and regularization penalties and the early stopping procedure jointly acting as priors (for similar interpretations, see BID47 BID6 BID9 .The exact equivalence between early stopping and a Gaussian prior on the weights in the linear case, as well as the implicit regularization to the parameter initialization the nonlinear case, tells us that every iterate of truncated gradient descent is a mode of an implicit posterior. In particular, we are not required to take the gradient descent procedure of fast adaptation that computesφ to convergence in order to establish a connection between MAML and hierarchical Bayes. MAML can therefore be understood to approximate an expectation of the marginal negative log likelihood (NLL) for each task T j as DISPLAYFORM8 The algorithm for MAML as probabilistic inference is given in Algorithm 2; Subroutine 3 computes each marginal NLL using the point estimate ofφ as just described. Formulating MAML in this way, as probabilistic inference in a hierarchical Bayesian model, motivates the interpretation in Section 3.2 of using various meta-optimization algorithms to induce a prior over task-specific parameters. From Section 3.1, we may conclude that early stopping during fast adaptation is equivalent to a specific choice of a prior over task-specific parameters, p(φ j | θ). We can better understand the role of early stopping in defining the task-specific parameter prior in the case of a quadratic objective. 
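As a concrete numerical illustration of the linear-regression equivalence above (our own check, not the paper's code): with a squared-error objective, k gradient steps from theta land exactly on the minimizer of ||y - X phi||^2 + ||theta - phi||^2_Q, i.e., on a Gaussian MAP estimate with prior mean theta. The specific q_i used below follow from a short derivation under a one-half squared-error convention and are our reconstruction of the Q whose exact form is described in Section 3.2.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, alpha = 50, 5, 10, 0.01
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
theta = rng.standard_normal(d)            # initialization, i.e., the prior mean

# k steps of gradient descent on 0.5 * ||X phi - y||^2, starting from theta
phi = theta.copy()
for _ in range(k):
    phi -= alpha * X.T @ (X @ phi - y)

# Equivalent MAP estimate: minimizer of ||y - X phi||^2 + ||theta - phi||_Q^2,
# with Q built in the eigenbasis of X^T X (the step size and k enter through q_i).
lam, O = np.linalg.eigh(X.T @ X)
q = ((1.0 - alpha * lam) ** (-k) - 1.0) / lam
Q = O @ np.diag(q) @ O.T
phi_map = np.linalg.solve(X.T @ X + np.linalg.inv(Q), X.T @ y + np.linalg.inv(Q) @ theta)

print(np.max(np.abs(phi - phi_map)))      # agreement up to numerical precision
```

The quadratic analysis that follows generalizes this picture to preconditioned gradient descent.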
Omit the task index from φ and x, and consider a second-order approximation of the fast adaptation objective (φ) = − log p(x 1 . . ., x N | φ) about a minimum φ *: DISPLAYFORM0 where the Hessian H = ∇ 2 φ (φ *) is assumed to be positive definite so that˜ is bounded below. Furthermore, consider using a curvature matrix B to precondition the gradient in gradient descent, giving the update DISPLAYFORM1 If B is diagonal, we can identify as a Newton method with a diagonal approximation to the inverse Hessian; using the inverse Hessian evaluated at the point φ (k−1) recovers Newton's method itself. On the other hand, meta-learning the matrix B matrix via gradient descent provides a method to incorporate task-general information into the covariance of the fast adaptation prior, p(φ | θ). For instance, the meta-learned matrix B may encode correlations between parameters that dictates how such parameters are updated relative to each other. Formally, taking k steps of gradient descent from φ = θ using the update rule in gives a φ (k) that solves DISPLAYFORM2 The minimization in corresponds to taking a Gaussian prior p(φ | θ) with mean θ and covariance BID44 where B is a diagonal matrix that from a simultaneous diagonalization of H and B as O T HO = diag(λ 1, . . ., λ n) = Λ and DISPLAYFORM3 DISPLAYFORM4.., n (Theorem 8.7.1 in BID16 . If the true objective is indeed quadratic, then, assuming the data is centered, H is the unscaled covariance matrix of features, X T X. Identifying MAML as a method for probabilistic inference in a hierarchical model allows us to develop novel improvements to the algorithm. In Section 4.1, we consider an approach from Bayesian parameter estimation to improve the MAML algorithm, and in Section 4.2, we discuss how to make this procedure computationally tractable for high-dimensional models. We have shown that the MAML algorithm is an empirical Bayes procedure that employs a point estimate for the mid-level, task-specific parameters in a hierarchical Bayesian model. However, the use of this point estimate may lead to an inaccurate point approximation of the integral in if the posterior over the task-specific parameters, p(φ j | x j N +1, . . ., x j N +M, θ), is not sharply peaked at the value of the point estimate. The Laplace approximation BID24 BID29 a) is applicable in this case as it replaces a point estimate of an integral with the volume of a Gaussian centered at a mode of the integrand, thereby forming a local quadratic approximation. We can make use of this approximation to incorporate uncertainty about the task-specific parameters into the MAML algorithm at fast adaptation time. In particular, suppose that each integrand in has a mode φ * j at which it is locally well-approximated by a quadratic function. The Laplace approximation uses a second-order Taylor expansion of the negative log posterior in order to approximate each integral in the product in as DISPLAYFORM0 where H j is the Hessian matrix of second derivatives of the negative log posterior. Classically, the Laplace approximation uses the MAP estimate for φ * j, although any mode can be used as an expansion site provided the integrand is well enough approximated there by a quadratic. We use the point estimateφ j uncovered by fast adaptation, in which case the MAML objective in becomes an appropriately scaled version of the approximate marginal likelihood DISPLAYFORM1 The term log p(φ j | θ) from the implicit regularization imposed by early stopping during fast adaptation, as discussed in Section 3.1. 
The term 1 /2 log det(H j), on the other hand, from the Laplace approximation and can be interpreted as a form of regularization that penalizes model complexity. Using as a training criterion for a neural network model is difficult due to the required computation of the determinant of the Hessian of the log posterior H j, which itself decomposes into a sum of the Hessian of the log likelihood and the Hessian of the log prior as DISPLAYFORM0 In our case of early stopping as regularization, the prior over task-specific parameters p(φ j | θ) is implicit and thus no closed form is available for a general model. Although we may use the quadratic approximation derived in Section 3.2 to obtain an approximate Gaussian prior, this prior is not diagonal and does not, to our knowledge, have a convenient factorization. Therefore, in our experiments, we instead use a simple approximation in which the prior is approximated as a diagonal Gaussian with precision τ. We keep τ fixed, although this parameter may be cross-validated for improved performance. DISPLAYFORM1 Subroutine 4: Subroutine for computing a Laplace approximation of the marginal likelihood. Similarly, the Hessian of the log likelihood is intractable to form exactly for all but the smallest models, and furthermore, is not guaranteed to be positive definite at all points, possibly rendering the Laplace approximation undefined. To combat this, we instead seek a curvature matrixĤ that approximates the quadratic curvature of a neural network objective function. Since it is well-known that the curvature associated with neural network objective functions is highly non-diagonal (e.g., BID32, a further requirement is that the matrix have off-diagonal terms. Due to the difficulties listed above, we turn to second order gradient descent methods, which precondition the gradient with an inverse curvature matrix at each iteration of descent. The Fisher information matrix BID13 has been extensively used as an approximation of curvature, giving rise to a method known as natural gradient descent BID1 . A neural network with an appropriate choice of loss function is a probabilistic model and therefore defines a Fisher information matrix. Furthermore, the Fisher information matrix can be seen to define a convex quadratic approximation to the objective function of a probabilistic neural model BID38 BID31 . Importantly for our use case, the Fisher information matrix is positive definite by definition as well as non-diagonal. However, the Fisher information matrix is still expensive to work with. BID33 developed Kronecker-factored approximate curvature (K-FAC), a scheme for approximating the curvature of the objective function of a neural network with a block-diagonal approximation to the Fisher information matrix. Each block corresponds to a unique layer in the network, and each block is further approximated as a Kronecker product (see BID54 of two much smaller matrices by assuming that the second-order statistics of the input activation and the back-propagated derivatives within a layer are independent. These two approximations ensure that the inverse of the Fisher information matrix can be computed efficiently for the natural gradient. For the Laplace approximation, we are interested in the determinant of a curvature matrix instead of its inverse. However, we may also make use of the approximations to the Fisher information matrix from K-FAC as well as properties of the Kronecker product. 
In particular, we use the fact that the determinant of a Kronecker product is the product of the exponentiated determinants of each of the factors, and that the determinant of a block diagonal matrix is the product of the determinants of the blocks BID54 . The determinants for each factor can be computed as efficiently as the inverses required by DISPLAYFORM2 We make use of the Laplace approximation and K-FAC to replace Subroutine 3, which computes the task-specific marginal NLLs using a point estimate forφ. We call this method the Lightweight Laplace Approximation for Meta-Adaptation (LLAMA), and give a replacement subroutine in Subroutine 4. The goal of our experiments is to evaluate if we can use our probabilistic interpretation of MAML to generate samples from the distribution over adapted parameters, and futhermore, if our method can be applied to large-scale meta-learning problems such as miniImageNet. and amplitudes, and the interpretation of the method as hierarchical Bayes makes it practical to directly sample models from the posterior. In this figure, we illustrate various samples from the posterior of a model that is meta-trained on different sinusoids, when presented with a few datapoints (in red) from a new, previously unseen sinusoid. Note that the random samples from the posterior predictive describe a distribution of functions that are all sinusoidal and that there is increased uncertainty when the datapoints are less informative (i.e., when the datapoints are sampled only from the lower part of the range input, shown in the bottom-right example). The connection between MAML and hierarchical Bayes suggests that we should expect MAML to behave like an algorithm that learns the mean of a Gaussian prior on model parameters, and uses the mean of this prior as an initialization during fast adaptation. Using the Laplace approximation to the integration over task-specific parameters as in assumes a task-specific parameter posterior with mean at the adapted parametersφ and covariance equal to the inverse Hessian of the log posterior evaluated at the adapted parameter value. Instead of simply using this density in the Laplace approximation as an additional regularization term as in FORMULA0, we may sample parameters φ j from this density and use each set of sampled parameters to form a set of predictions for a given task. To illustrate this relationship between MAML and hierarchical Bayes, we present a meta-dataset of sinusoid tasks in which each task involves regressing to the output of a sinusoid wave in Figure We observe in FIG0 that our method allows us to directly sample models from the task-specific parameter distribution after being presented with 10 datapoints from a new, previously unseen sinusoid curve. In particular, the column on the right of FIG0 demonstrates that the sampled models display an appropriate level of uncertainty when the datapoints are ambiguous (as in the bottom right). We evaluate LLAMA on the miniImageNet BID40 1-shot, 5-way classification task, a standard benchmark in few-shot classification. miniImageNet comprises 64 training classes, 12 validation classes, and 24 test classes. Following the setup of BID55, we structure the N -shot, J-way classification task as follows: The model observes N instances of J unseen classes, and is evaluated on its ability to classify M new instances within the J classes. 
We use a neural network architecture standard to few-shot classification (e.g., BID55 BID40, consisting of 4 layers with 3 × 3 convolutions and 64 filters, followed by batch normalization (BN) BID19, a ReLU nonlinearity, and 2 × 2 max-pooling. For the scaling variable β and centering variable γ of BN (see BID19, we ignore the fast adaptation update as well as the Fisher factors for K-FAC. We use Adam BID20 as the meta-optimizer, and standard batch gradient descent with a fixed learning rate to update the model BID53 49.82 ± 0.78 BID12 48.70 ± 1.84 LLAMA (Ours) DISPLAYFORM0 49.40 ± 1.83 during fast adaptation. LLAMA requires the prior precision term τ as well as an additional parameter η ∈ R + that weights the regularization term log detĤ contributed by the Laplace approximation. We fix τ = 0.001 and selected η = 10 −6 via cross-validation; all other parameters are set to the values reported in BID12.We find that LLAMA is practical enough to be applied to this larger-scale problem. In particular, our TensorFlow implementation of LLAMA trains for 60,000 iterations on one TITAN Xp GPU in 9 hours, compared to 5 hours to train MAML. As shown in TAB1, LLAMA achieves comparable performance to the state-of-the-art meta-learning method by BID53. While the gap between MAML and LLAMA is small, the improvement from the Laplace approximation suggests that a more accurate approximation to the marginalization over task-specific parameters will lead to further improvements. Meta-learning and few-shot learning have a long history in hierarchical Bayesian modeling (e.g., BID51 BID11 BID25 BID58 BID14 BID8 BID56 . A related subfield is that of transfer learning, which has used hierarchical Bayes extensively (e.g., BID39 . A variety of inference methods have been used in Bayesian models, including exact inference BID22, sampling methods BID41, and variational methods BID10 . While some prior works on hierarchical Bayesian models have proposed to handle basic image recognition tasks, the complexity of these tasks does not yet approach the kinds of complex image recognition problems that can be solved by discriminatively trained deep networks, such as the miniImageNet experiment in our evaluation BID30 .Recently, the Omniglot benchmark Lake et al. FORMULA0 has rekindled interest in the problem of learning from few examples. Modern methods accomplish few-shot learning either through the design of network architectures that ingest the few-shot training samples directly (e.g., BID21 BID55 BID48 BID17 BID53, or formulating the problem as one of learning to learn, or meta-learning (e.g., BID45 BID4 BID46 BID3 . A variety of inference methods have been used in Bayesian models, including exact inference , sampling methods BID42, and variational methods BID10.Our work bridges the gap between gradient-based meta-learning methods and hierarchical Bayesian modeling. Our contribution is not to formulate the meta-learning problem as a hierarchical Bayesian model, but instead to formulate a gradient-based meta-learner as hierarchical Bayesian inference, thus providing a way to efficiently perform posterior inference in a model-agnostic manner. We have shown that model-agnostic meta-learning (MAML) estimates the parameters of a prior in a hierarchical Bayesian model. By casting gradient-based meta-learning within a Bayesian framework, our analysis opens the door to novel improvements inspired by probabilistic machinery. 
As a step in this direction, we propose an extension to MAML that employs a Laplace approximation to the posterior distribution over task-specific parameters. This technique provides a more accurate estimate of the integral that, in the original MAML algorithm, is approximated via a point estimate. We show how to estimate the quantity required by the Laplace approximation using Kroneckerfactored approximate curvature (K-FAC), a method recently proposed to approximate the quadratic curvature of a neural network objective for the purpose of a second-order gradient descent technique. Our contribution illuminates the road to exploring further connections between gradient-based metalearning methods and hierarchical Bayesian modeling. For instance, in this work we assume that the predictive distribution over new data-points is narrow and well-approximated by a point estimate. We may instead employ methods that make use of the variance of the distribution over task-specific parameters in order to model the predictive density over examples from a novel task. Furthermore, it is known that the Laplace approximation is inaccurate in cases where the integral is highly skewed, or is not unimodal and thus is not amenable to approximation by a single Gaussian mode. This could be solved by using a finite mixture of Gaussians, which can approximate many density functions arbitrarily well BID49 BID0. The exploration of additional improvements such as this is an exciting line of future work. | A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation. | 602 | scitldr |
This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker. Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm. Neither prior domain knowledge about the data nor feature preprocessing is needed. We significantly reduce the time of AutoML with a naturally inspired algorithm - Parallel Hill Climbing (PHC). By parallelizing PHC, Autostacker can provide candidate pipelines with sufficient prediction accuracy within a short amount of time. These pipelines can be used as is or as a starting point for human experts to build on. By focusing on the modelling process, Autostacker breaks the tradition of following fixed order pipelines by exploring not only single model pipeline but also innovative combinations and structures. As we will show in the experiment section, Autostacker achieves significantly better performance both in terms of test accuracy and time cost comparing with human initial trials and recent popular AutoML system. Machine Learning nowadays is the main approach for people to solve prediction problems by utilizing the power of data and algorithms. More and more models have been proposed to solve diverse problems based on the character of these problems. More specifically, different learning targets and collected data correspond to different modelling problems. To solve them, data scientists not only need to know the advantages and disadvantages of various models, they also need to manually tune the hyperparameters within these models. However, understanding thoroughly all of the models and running experiments to tune the hyperparameters involves a lot of effort and cost. Thus, automating the modelling procedure is highly desired both in academic areas and industry. An AutoML system aims at providing an automatically generated baseline with better performance to support data scientists and experts with specific domain knowledge to solve machine learning problems with less effort. The input to AutoML is a cleanly formatted dataset and the output is one or multiple modelling pipelines which enables the data scientists to begin working from a better starting point. There are some pioneering efforts addressing the challenge of finding appropriate configurations of modelling pipelines and providing some mechanisms to automate this process. However, these works often rely on fixed order machine learning pipelines which are obtained by mimicking the traditional working pipelines of human experts. This initial constraint limits the potential of machine to find better pipelines which may or may not be straightforward, and may or may not have been tried by human experts before. In this work, we present an architecture called Autostacker which borrows the stacking BID1 method from ensemble learning, but allows for the discovery of pipelines made up of simply one model or many models combined in an innovative way. All of the automatically generated pipelines from Autostacker will provide a good enough starting point compared with initial trials of human experts. However, there are several challenges to accomplish this:• The quality of the datasets. Even though we are stepping into a big data era, we have to admit that there are still a lot of problems for which it is hard to collect enough data, especially data with little noise, such as historical events, medical research, natural disasters and so on. 
We tackle this challenge by always using the raw dataset in all of the stacking layers Figure 1: This figure describes the pipeline architecture of Autostacker. Autostacker pipelines consists of one or multiple layers and one or multiple nodes inside each layer. Each node represents a machine learning primitive model, such as SVM, MLP, etc. The number of layers and the number of nodes per layer can be specified beforehand or they can be changeable as part of the hyperparameters. In the first layer, the raw dataset is used as input. Then in the following layers, the prediction from each node will be added to the raw dataset as synthetic features (new colors). The new dataset generated by each layer will be used as input to the next layer.while also adding synthetic features in each stacking layer to fully use the information in the current dataset. More details are provided in the Approach section below.• The generalization ability of the AutoML framework. As mentioned above, existing AutoML frameworks only allow systems to generate an assembly line from data preprocessing and feature engineering to model selection where only a specific single model will be utilized by plugging in a previous model library. In this paper, depending on the computational cost and time cost, we make the number of such primitive models a variable which can be changed dynamically during the pipeline generation process or initialized in the beginning. This means that the simplest pipeline could be a single model, and the most complex pipeline could contain hundreds of primitive models as shown in Figure 1 • The large space of variables. The second challenge mentioned above leads to this problem naturally. Considering the whole AutoML framework, variables include the type of primitive machine learning models, the configuration settings of the framework (for instance, the number of primitive models in each stacking layer) and the hyperparameters in each primitive model. One way to address this issue is to treat this as an optimization problem BID3. Here in this paper, we instead treat this challenge as a search problem. We propose to use a naturally inspired algorithm, Parallel Hill Climbing (PHC), BID10 to effectively search for appropriate candidate pipelines. To make the definition of the problem clear, we will use the terminology listed below throughout this paper:• Primitive and Pipeline: primitive denotes an existed single machine learning model, for example, a DecisionTree. In addition, these also include traditional ensemble learning models, such as Adaboost and Bagging. The pipeline is the form of the output of Autostacker, which is a single primitive or a combination of primitives.• Layer and Node: Figure 1 shows the architecture of Autostacker which is formed by multiple stacking layers and multiple nodes in each layers. Each node represents a machine learning primitive model. Automated Machine Learning has recently gained more attention and there are a variety of related research programs underway and tools coming out. In this section, we first describe recent work in this field and then explain where our work fits in. The current focus in AutoML mainly consists of two parts: machine learning pipeline building and intelligent model hyperparameter search. By extending the fixed pipeline used in works mentioned above, such as Auto-Weka and Autosklearn, one of the most recent and popular framework called TPOT Olson et al. 
FORMULA0 allows for parallel feature engineering which happens before model prediction. However, all of the works mentioned above follow the same machine learning pipeline. TPOT also uses Evolutionary Algorithms to treat the parameter configuration problem as a search problem. In this work, we use TPOT as one of our baselines. As described above, there is very little work trying to discover innovative pipelines, even with traditional building blocks, such as sklearn. Pushing the ability of machines to be able to discover innovative machine learning building pipelines, such as new combinations or new arrangements, is necessary to cover a larger space of possible architectures. In this work, we encourage Autostacker to fulfill this requirement in two ways: 1. generating models with new combinations and 2. generating models with completely innovative architectures. In terms of the optimization methods, we offer an alternative solution to search for the settings with Parallel Hill Climbing which is very effective, especially when we are faced with a giant possible search space. The success of using this kind of strategy on large scale AutoML is also proved in TPOT. The working process of Autostacker is shown in Figure 2 and the overview of pipeline architecture built by Autostacker is hown in Figure 1. The whole pipeline consists of multiple layers, where each layer contains multiple nodes. These nodes are the primitive machine learning models. The ith layer takes in the dataset X i, and outputs the prediction Y i,j, where Y i,j denotes the prediction of the jth node in the ith layer (i = 0, 1, 2, ..., I, j = 0, 1, 2, ..., J). After each layer's prediction, we add these prediction back to the dataset used as input to the layer as synthetic features, and then use this newly generated dataset as the input of the next layer. With each new layer, the dataset gets more and more synthetic features until the last layer which only consists of a single node. We take the output of the last layer as the final output of this machine learning problem. Again, if we use f k to denote the kth (k = 0, 1, 2, ..., K) feature in the dataset, the final dataset will contain DISPLAYFORM0 features in total and this new dataset will be used in the last layer prediction. N i (0,1,2,...) is the number of nodes in the ith layer. The total number of features in the dataset before the last layer can be specified by users. Unlike the traditional stacking algorithm in ensemble learning, which only feeds the prediction into next layer as inputs, this proposed architecture always keeps the information directly from the raw dataset. Here are the considerations: Figure 2: The overview process of Autostacker and its usage is shown here. First, we start to build the model pipelines by generating initial pipelines with dynamic configurations of architecture and hyperparameters, feeding into PHC algorithm, and looping the process to generate winning pipelines. Then the winning pipelines can be used as better baselines for data scientists, or we can analyze the pattern in the winning pipeline sets for further meta-learning usage.• The number of items in the dataset could be very small. If so, the prediction from each layer could contain very little information about the problem and it is very likely that the primitives bias the outcomes a lot. 
Accordingly, throwing away the raw dataset could lead to high-biased prediction which is not suitable for generalization, especially for situations where we could have more training data in the future.• Moreover, by combining the new synthetic features with the raw dataset, we implicitly give some features more weight when these features are important for prediction accuracy. Yet we do not delete the raw dataset because we do not fully trust the primitives in individual layers. We can consequently reduce the influences of bias coming from individual primitive and noise coming from the raw dataset. There are multiple hyperparameters within this architecture:• I and J: the maximum number of layers and the maximum number of nodes corresponding to each layer.• H: the hyperparameters in each primitive.• The types of the primitives. Here we provide a dictionary of primitives which only serves as a search space. Note that Autostacker provides two specifications for I and J. The default mode is to let users simply specify the maximum range of I and J. Only two positive integers are needed to enable Autostacker to explore different configurations. There are two advantages here: 1. This mode frees the system of constraints and allows for the discover of further possible innovative pipelines. 2. This speed up the whole process significantly. We will illustrate this point in the Experiment section later. Another choice is to explicitly denote the value of I and J. This allows systems to build pipelines with a specific number of layers and number of nodes per layer based on allowed computational power and time. The search algorithm for finding the appropriate hyperparameters is described in the next section. In this paper, the Parallel Hill Climber (PHC) Algorithm has been chosen as the search algorithm to find the group of hyperparameters which can lead to better baseline model pipelines. PHC is commonly used as baseline algorithm in the development of Evolutionary Algorithm. As we will show later, our system can already achieve significantly better performance with this straightforward baseline algorithm. Algorithm 1 provides the details of this algorithm in our system. First, we generate N completed pipelines by randomly selecting the hyperparameters. Then we run a one step Hill Climber on top of these N pipelines to get another N pipelines. The one-step Hill Climber essentially just randomly changes one of the hyperparameters, for example the number of estimators in a Random Forest Classifier. Now we train these 2N pipelines and evaluate them through cross validation. Then N pipelines with the highest validation accuracies are selected as the seed pipelines for the next generation of Parallel Hill Climber. Once the seed pipelines are ready, another one step Hill Climber will be applied on them and another round of evaluation and selection will be executed afterwards. The same loop continues until the end of all the iterations, where the number of iterations M can be specified by users. eva = EV ALU AT E(eva pip) DISPLAYFORM0 sel pip = SELECT (eva pip, eva , N) iter init = sel pip 10: end for 11: Return sel pip 12: function HILLCLIMBER(list pip) for each integer i in length of list pip do Return sel pip 28: end function This section presents the training and testing procedure. The training process happens in the evaluation step as shown above. Corresponding to our hierarchical framework, the pipeline is trained layer by layer. 
Inside each layer, each primitive is also trained independently with the same dataset. The next layer will be trained after integrating the previous dataset with with prediction from the previous trained layer. Similarly, the validation process and testing process share the same mechanism but with validation set and test set respectively. After training and validating the pipelines, we pick the first 10 pipelines with the highest validation accuracies as the final output of Autostacker. We believe that these 10 pipelines can provide better baselines for human experts to get started with the problem. Here choosing the top 10 pipelines instead of the first one pipeline directly is based on the consideration that the input might be a small amount of data which is more likely to be unbalanced. Yet unbalanced data cannot guarantee that the performance on the validation process can fully represent that on the testing process. For example, two pipelines with the same validation might behave very differently on the same test dataset. Hence, it is necessary to provide a set of candidates which can be guaranteed to do better on average so that human experts can fine tune the pipelines subsequently. Another significant advantage of our approach is that the system is very flexible to scale up and parallelize. Starting from the initial generation, one-step hill climbing, training, validation to evaluation, each pipeline runs independently, which means that each worker can work on one pipeline alone. There is no frequent communication or sequential decision making among all the workers and each worker can run through the pipeline separately. They only need to share the validation to be ranked by the end of each iteration. Then one shot selection based on the validation accuracy will be applied on the outputs of the parallel workers. More specifically, in terms of the Algorithm 1 we show above, Random, HILLCLIMBER, and EVALUATE function are all very easily parallelized when the system runs. To show the performance of our system, we selected 15 datasets from the benchmark dataset provided in BID9 which collects and cleans datasets from public data resources, such as and UCILichman FORMULA0 etc., as the sample experimental data. According to the published in TPOT, we arbitrarily choose 9 datasets claimed to have better in TPOT comparing with Random Forest Classifier, 4 datasets with worse performance in TPOT and 2 datasets with same performance with Random Forest Classifier in TPOT. We limit the total number of datasets to be 15 to show here to cover all cases of datasets in TPOT. These datasets come from different problem domains and target different machine learning tasks including binary classification and multi-class classification. Autostacker is also compatible with regression problems. We will release on more benchmark datasets as well as the code base. The data is cleaned in terms of filling in the missing values with large negative values or deleting the data points with missing values. Other than that, there is no other data preprocessing or feature preprocessing in Autostacker. It would certainly be possible to use preprocessing on the dataset and features as another building block or hyperparameter in Autostacker, and we also provide this flexibility in our system. Nevertheless, in this paper we focus only on the modelling process to show our contribution to the architecture and automation process. 
Before each round of the experiment, we shuffle and partition the dataset to 80%/20% as training/testing data. The goal of Autostacker is to provide a better baseline pipeline for data scientists in an automatic way. Thus, the baseline we choose to compare with should be able to represent the prediction ability of pipelines coming from the initial trials of data scientists. The baseline pipeline that we compare with is chosen to be Random Forest Classifier / Regressor with the number of estimators being 500 as ensemble learning models like Random Forest have been shown to work well on average in practice when considering multi-model predictions. We also compare our to those of the TPOT model from BID8 which is one of the most recent and popular AutoML systems. Currently, our primitives are from the scikit-learn library BID11 and XGboost libaray BID2 The full list is in TAB0 in the appendix. In Autostacker, users are allowed to plug in any primitives they like as long as the function signatures are consistent with our current code base. In terms of the basic structure (number of layers and number of nodes per layer) of the candidate pipelines, as we mentioned above, there are two types of settings provided in Autostacker. In this section, we show the performance of the default mode of Autostacker: dynamic configurations. We specify the maximum range of number of layers to be 5 and the maximum range of number of nodes per layer to be 3. In this section, we will show the of the test accuracy and time cost of Autostacker as well as comparisons with the Random Forest and TPOT baselines. The test accuracy is calculated using balanced accuracy BID17. We refer to them as test accuracy in the rest of this paper. We ran 10 rounds of experiments for Random Forest Classifier and 3 to 10 rounds of experiment for TPOT based on the computation and time cost. For Autostacker, 3 rounds of experiments are executed on each dataset and the datasets get shuffled before each round. Thus, the figure contains 30 test in total where each 10 of them come from 1 round experiment. The notches in the box plot represent the 95% confidence intervals of median values. We use one machine with 24 CPUs to parallelize each experiment for each architecture. As shown in FIG5, all the left side columns shows the test accuracy comparisons on the 15 sample datasets. From the comparisons, we can tell several things:• Autostacker achieves 100% better test accuracy compared with Random Forest Baselines, and 13 out of 15 better accuracy compared with TPOT, while the rest Hill Valley with noise and vehicle datasets achieve extremely similar or slight lower accuracy according to median values.• Autostacker is much more robust. Autostakcer can always provide better baselines to speed up further work, while Random Forest fails to give any better on the parity5 dataset, and TPOT fails to provide better baselines than Random Forest Classifier on the breastcancer, pima, ecoli, wine-recognition and cars datasets after spending a couple of hours searching. This kind of guarantee of not being worse on average comes from the characteristic of Autostacker: the innovative stacking architecture which fully utilizes the predictions coming from different primitive models as well as the whole information of the raw dataset. In the meantime, Autostacker does not give up the single model case if it is better than a stacking architecture. Hence, Autostacker essentially contains the Random Forst Baseline we are using here. 
All the right side columns in FIG5 show the time cost among comparisons. Autostacker largely reduce the time usage up to 6 times comparing with TPOT on all the sample datasets. In of the experiment , the output of Autostacker can improve the baseline pipeline sufficiently enough for human experts to start with better pipelines within a short mount of time, and Autostacker achieves significantly better performance on sample datasets than all baseline comparisons. During the experiments and research process, we noticed that Autostacker still has several limitations. Here we will describe these limitations and possible future solutions:• The ability to automate the machine learning process for large scale datasets is limited. Nowadays, there are more sophisticated models or deep learning approaches which achieve very good on large scale datasets and multi-task problems. Our current primitive library and modelling structure is very limited at solving these problems. One of the future solutions could be to incorporate more advanced primitives and to choose to use them when necessary.• Autostacker can be made more efficient with better search algorithms. There are a lot of modern evolutionary algorithms, and some of them are based on the Parallel Hill Climber that we use in this work. We believe that Autostacker could be made faster by incorporating them. We also believe traditional methods and knowledge from statistics and probability will be very helpful to better understand the output of Autostacker, such as by answering questions like: why do was a particular pipeline chosen as one of the final candidate pipelines? In this work, we contribute to automating the machine learning modelling process by proposing Autostacker, a machine learning system with an innovative architecture for automatic modelling and a well-behaved efficient search algorithm. We show how this system works and what the performance of this system is, comparing with human initial trails and related state of art techniques. We also demonstrate the scaling and parallelization ability of our system. In , we automate the machine learning modelling process by providing an efficient, flexible and well-behaved system which provides the potential to be generalized into complicated problems and is able to be integrated with data and feature processing modules. | Automate machine learning system with efficient search algorithm and innovative structure to provide better model baselines. | 603 | scitldr |
Surrogate models can be used to accelerate approximate Bayesian computation (ABC). In one such framework the discrepancy between simulated and observed data is modelled with a Gaussian process. So far, principled strategies have been proposed only for sequential selection of the simulation locations. To address this limitation, we develop Bayesian optimal design strategies to parallelise the expensive simulations. We also address the problem of quantifying the uncertainty of the ABC posterior due to the limited budget of simulations. Approximate Bayesian computation is used for Bayesian inference when the analytic form of the likelihood function of a statistical model of interest is either unavailable or too costly to evaluate, but simulating the model is feasible. Unfortunately, many models, e.g. in genomics and epidemiology (; ;) and climate science, are costly to simulate, making sampling-based ABC inference algorithms infeasible. To increase the sample-efficiency of ABC, various methods using surrogate models such as neural networks (; ; ;) and Gaussian processes (; ; ; Järvenpää et al., 2018, 2019a) have been proposed. In one promising surrogate-based ABC framework the discrepancy between the observed and simulated data is modelled with a Gaussian process (GP) (; Järvenpää et al., 2018, 2019a). Sequential Bayesian experimental design (or active learning) methods to select the simulation locations so as to maximise the sample-efficiency in this framework were proposed by Järvenpää et al. (2019a). However, one often has access to multiple computers to run some of the simulations in parallel. In this work, motivated by the related problem of batch Bayesian optimisation (; ; ;) and the parallel GP-based method by Järvenpää et al. (2019b) for inference tasks where noisy and potentially expensive log-likelihood evaluations can be obtained, we resolve this limitation by developing principled batch simulation methods which considerably decrease the wall-time needed for ABC inference. The posterior distribution is often summarised for further decision making using e.g. its expectation and variance. When the computational resources for ABC inference are limited, it would be important to assess the accuracy of such summaries, but this has not been explicitly acknowledged in earlier work. We devise an approximate numerical method to propagate the uncertainty of the discrepancy, represented by the GP model, to the resulting ABC posterior summaries. We call the resulting framework Bayesian ABC, in analogy with the related problems of Bayesian quadrature (; ;) and Bayesian optimisation (BO). Let π(θ) denote the prior density of the (continuous) parameters θ ∈ Θ ⊂ R^p of a statistical model of interest and π(x_obs | θ) the corresponding intractable likelihood function. Standard ABC algorithms such as the ABC rejection sampler target the approximate posterior π_ABC(θ) ∝ π(θ) P(∆_θ ≤ ε), where ∆_θ denotes the discrepancy between the observed data and data simulated with parameter θ, and ε is a threshold. We consider the batch setting where b simulations are simultaneously selected to be computed in parallel at each iteration of our algorithm. Consider a loss function l: D² → R_+ so that l(π_ABC, d) quantifies the penalty of reporting d ∈ D as our ABC posterior when the true one is π_ABC ∈ D. Given the current training data D_t, the one-batch-ahead Bayes-optimal selection of the next batch of b evaluation locations θ* is given in Eq. 3. In Eq. 3, an expectation over the b future discrepancy evaluations ∆* = (∆*_1, ..., ∆*_b) at locations θ* needs to be computed assuming ∆* follows the current GP model.
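To make the role of the GP surrogate concrete, the following minimal sketch evaluates a pointwise estimate of the unnormalised model-based ABC posterior of the kind referred to below as Eq. 11. It assumes the common Gaussian observation model ∆_θ ~ N(f(θ), σ_n²) for the discrepancy and a fitted GP with posterior mean m_t(θ) and variance v_t(θ); the function and variable names are illustrative and do not correspond to the paper's implementation.

import numpy as np
from scipy.stats import norm

def abc_posterior_estimate(theta, prior_pdf, gp_mean, gp_var, sigma_n, eps):
    # Unnormalised model-based ABC posterior estimate: the prior times the
    # probability, under the GP, that the discrepancy at theta is below eps.
    m, v = gp_mean(theta), gp_var(theta)
    return prior_pdf(theta) * norm.cdf((eps - m) / np.sqrt(sigma_n**2 + v))

The design strategies developed next quantify how uncertain this estimate is over Θ and how much that uncertainty is expected to shrink once the b new simulations are added.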
The expectation is taken of the Bayes risk L(Π^f_{D_t ∪ D*}) resulting from the nested decision problem of choosing the estimator d, assuming ∆* is known and merged with the current data. Using a loss function based on the L² distance ∫_Θ (π̃^f_ABC(θ) − d(θ))² dθ between the unnormalised ABC posterior π̃^f_ABC and its estimator d, the optimal estimator is the mean in Eq. 11. The resulting expected integrated variance (EIV) acquisition function, denoted L^v_t(θ*), is given in Eq. 4, where a_t is defined in Eq. 12 and T is Owen's T function. We use greedy optimisation, as is also common in batch BO (see, e.g., ;), and the integral over Θ is computed using importance sampling. We can also show that the corresponding L¹ loss function produces the marginal median in Eq. 13 of the Appendix as the optimal estimator. The resulting acquisition function, called expected integrated MAD (EIMAD), as well as some heuristically motivated batch methods used as baselines (called MAXV and MAXMAD), are developed in Appendix A.2. Pointwise marginal uncertainty of the unnormalised ABC posterior π̃^f_ABC was used above for selecting the simulation locations. However, knowing π̃^f_ABC and its marginal uncertainty at some individual θ-values is not very helpful for understanding the accuracy of the final estimate of π^f_ABC. Computing the distribution of, e.g., the ABC posterior expectation or marginals using π^f_ABC in Eq. 2 is clearly more intuitive. Unfortunately, such computations are difficult due to the nonlinear dependence on the infinite-dimensional quantity f. We propose a simulation-based approach where we combine drawing of GP sample paths and normalised importance sampling. For full details and an illustration, see Appendix A.3 and Fig. 3. We use two real-world simulation models to compare the performance of the sequential and synchronous batch versions of the acquisition methods. As a simple baseline, we consider random points (RAND) drawn from the prior. ABC-MCMC with extensive simulations is used to compute the ground-truth ABC posterior. Median total variation distance (TV) over 50 repeated simulations is used to measure the quality of the approximations. See Appendix B for further details and Appendix C for additional results. Lorenz model. This model describes the dynamics of slow weather variables and their dependence on unobserved fast weather variables. The model is represented by a coupled stochastic differential equation which can only be solved numerically, resulting in an intractable likelihood. The model has two parameters θ = (θ_1, θ_2) which we estimate from time series data. We follow the experimental set-up of earlier work, with the exception that we set θ ∼ U(×[0, 0.5]). The results are shown in Fig. 1(a). Furthermore, Fig. 1(b-c) demonstrates the uncertainty quantification of the expectation of the model-based ABC posterior. The effect of batch size is shown in Fig. 2(c). Bacterial infections model. This model describes the transmission dynamics of bacterial infections in day care centers and features an intractable likelihood function. We estimate the internal, external and co-infection parameters β, Λ and θ, respectively, using true data and uniform priors. The discrepancy is formed as in earlier work. The results with all methods are shown in Fig. 2(a), and Fig. 2(b) shows the effect of batch size for the two best methods.
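The greedy batch construction used by these methods can be sketched as follows. In this minimal sketch, acq_value stands in for the EIV or EIMAD integrals (which in the paper are evaluated via importance sampling), and the inner optimisation is replaced by a search over a finite candidate set; none of these names correspond to the paper's actual implementation.

import numpy as np

def select_batch_greedy(acq_value, candidates, batch_size):
    # Greedy batch construction: each new point is chosen conditional on the
    # points already pending in the batch (a common heuristic in batch BO).
    pending = []
    for _ in range(batch_size):
        scores = [acq_value(theta, pending) for theta in candidates]
        best = candidates[int(np.argmin(scores))]  # the acquisition is a loss, so minimise
        pending.append(best)
    return pending

Building the batch greedily avoids the combinatorial joint optimisation over all b locations while still accounting for the points already scheduled for simulation.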
Difference in approximation quality between EIV and EIMAD, both based on the same Bayesian decision theoretic framework but different loss functions, was small. In all cases, our batch strategies produced similar evaluation locations as the corresponding sequential methods. This suggests that substantial improvements in wall-time can be obtained when the simulations are costly. The convergence of the uncertainty in the ABC posterior expectation in Fig. 1(b-c) is approximately towards the true ABC posterior expectation due to a slight GP misspecification. The ABC posterior marginals of the bacterial infection model in Appendix C contain some uncertainty after 600 iterations which our approach allows to rigorously quantify. Developing more effective (analytical) methods for computing these uncertainty estimates is an interesting topic for future work. corresponding vector of test point θ. For further details of GP regression, see e.g.. Formulas for the mean, median and variance ofπ f ABC were derived by Järvenpää et al. (2019a) in the case of a zero mean GP prior. It is easy to see that these formulas hold also for our more general GP model. For example, where med denotes the marginal (i.e. elementwise) median. The EIMAD acquisition function, denoted as L m t (θ *) can be shown to be where, similarly as for EIV in Eq. 4, T is Owen's T function and a t is given by Eq. 12. MAD stands for mean absolute deviation (around median). We do not show a detailed derivation of EIV and EIMAD acquisition functions here but only note that these can be obtained using similar computations as in Järvenpää et al. (2019a,b). We consider also a heuristic acquisition function which evaluates where the pointwise uncertainty ofπ f ABC (θ) is highest. Such intuitive strategy is sometimes called as uncertainty sampling and used, e.g., by , Järvenpää et al. (2019a) and. When variance is used as the measure of uncertainty ofπ f ABC (θ), we call the method as MAXV and when MAD is used, we obtain an alternative strategy called analogously MAXMAD. The ing acquisition functions can be computed analytically. Specifically, the variance is computed using Eq. 14. A similar formula can be derived for MAD. Finally, we propose a heuristic approach from BO to parallellise MAXV and MAXMAD strategies: The first point in the batch is chosen as in the sequential case. The further points are iteratively selected as the locations where the expected variance (or MAD), taken with respect to the discrepancy values of the pending points, that is points that have been already chosen to the current batch, is highest. The ing acquisition functions are immediately obtained as the integrands of Eq. 4 and 15. A.3. Uncertainty quantification of the ABC posterior (1 one MCMC sampling from the instrumental density and scales as O(nt 2) so that n can be large. Total cost is O((n +ñ)t 2 +ñ 2 (t + s) +ñ 3 ). We briefly describe some key details of our algorithm and the experiments. Locations for fitting the initial GP model are sampled from the uniform prior in all cases. We take 10 initial points for 2D and 20 for 3D cases. We use b = 0, B ij = 10 2 1 i=j and include basis functions of the form 1, θ i, θ 2 i. The discrepancy ∆ θ is assumed smooth and we use the squared exponential covariance function are given weakly informative priors and their values are obtained using MAP estimation at each iteration. Owen's T function values are computed using a C-implementation of the algorithm by. 
For simplicity and to ensure meaningful comparisons to the ground truth, we fix ε to certain small predefined values although, in practice, its value is set adaptively (Järvenpää et al., 2019a) or based on pilot runs. We compute the estimate of the unnormalised ABC posterior using Eq. 11 for MAXV, EIV and RAND, and Eq. 13 for MAXMAD and EIMAD. Adaptive MCMC is used to sample from the resulting ABC posterior estimates and from the instrumental densities needed for the IS approximations. TV denotes the median total variation distance between the estimated ABC posterior and the true one (2D), or the average of the TV distances between their marginals (3D), computed numerically over 50 repeated runs. Iteration (i.e. the number of batches chosen) serves as a proxy for wall-time. The number of simulations, i.e. the maximum value of t, is fixed in all experiments and the batch methods thus finish earlier. The Mahalanobis distance was used as the discrepancy for the Lorenz model. The simulation model was run 500 times to estimate the covariance matrix of the six summary statistics at the true parameter value, and the inverse of this covariance matrix was used in the Mahalanobis distance. Of course, such a discrepancy is unavailable in practice because the true parameter is unknown and the computational budget is limited. However, as the main goal of this paper is to approximate any given ABC posterior with a limited simulation budget, we chose our target ABC posterior this way. Earlier work defined a discrepancy for the bacterial infections model by summing four L¹-distances computed between certain individual summaries; for details, see Example 7 therein. We used the same discrepancy except that we further took the square root of their discrepancy function. We obtained an ABC posterior similar to that of the original article, where an ABC-PMC algorithm and a slightly different approach for comparing the data sets were used. The acquisition-location plots in Appendix C show the 10 initial points and, as black dots, the 100 additional points selected using each acquisition function (the last two batches in the second row are, however, highlighted by red plus-signs and crosses); the TV value in each title shows the total variation distance between the true and estimated ABC posteriors for that particular case. Figs. 5 and 6 show typical estimated ABC posterior densities of the Lorenz and bacterial infections models, respectively. These are shown to demonstrate the accuracy obtainable with very limited simulations. These particular results were obtained with the sequential EIV method using 600 iterations, corresponding to 610 simulations (Lorenz model) or 620 simulations (bacterial infections model). Fig. 7 illustrates the ABC posterior uncertainty quantification for the bacterial infections model. The sequential EIV method was used and one typical case is shown. The results suggest that while the ABC posterior is well estimated at the last iteration, there is some uncertainty left about its exact shape. | We propose principled batch Bayesian experimental design strategies and a method for uncertainty quantification of the posterior summaries in a Gaussian process surrogate-based approximate Bayesian computation framework. | 604 | scitldr
Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value. For some highly structured nonconvex problems, however, the success of gradient descent can be understood by studying the geometry of the objective. We study one such problem -- complete orthogonal dictionary learning -- and provide convergence guarantees for randomly initialized gradient descent to the neighborhood of a global optimum. The resulting rates scale as low-order polynomials in the dimension even though the objective possesses an exponential number of saddle points. This efficient convergence can be viewed as a consequence of negative curvature normal to the stable manifolds associated with saddle points, and we provide evidence that this feature is shared by other nonconvex problems of importance as well. Many central problems in machine learning and signal processing are most naturally formulated as optimization problems. These problems are often both nonconvex and high-dimensional. High dimensionality makes the evaluation of second-order information prohibitively expensive, and thus randomly initialized first-order methods are usually employed instead. This has prompted great interest in recent years in understanding the behavior of gradient descent on nonconvex objectives (18; 14; 17; 11). General analysis of first- and second-order methods on such problems can provide guarantees for convergence to critical points, but these may be highly suboptimal, since nonconvex optimization is in general an NP-hard problem BID3. Outside of a convex setting one must assume additional structure in order to make statements about convergence to optimal or high-quality solutions. It is a curious fact that for certain classes of problems, such as ones that involve sparsification (25; 6) or matrix/tensor recovery (21; 19; 1), first-order methods can be used effectively. Even for some highly nonconvex problems where there is no ground truth available, such as the training of neural networks, first-order methods converge to high-quality solutions. Dictionary learning, the problem of inferring a sparse representation of data, was originally developed in the neuroscience literature and has since seen a number of important applications including image denoising, compressive signal acquisition and signal classification (13; 26). In this work we study a formulation of the dictionary learning problem that can be solved efficiently using randomly initialized gradient descent despite possessing a number of saddle points exponential in the dimension. A feature that appears to enable efficient optimization is the existence of sufficient negative curvature in the directions normal to the stable manifolds of all critical points that are not global minima BID0. This property ensures that the regions of the space that feed into small gradient regions under gradient flow do not dominate the parameter space. FIG0 illustrates the value of this property: negative curvature prevents measure from concentrating about the stable manifold. As a consequence, randomly initialized gradient methods avoid the "slow region" around the saddle point. FIG0: Negative curvature helps gradient descent. Red: "slow region" of small gradient around a saddle point. Green: stable manifold associated with the saddle point. Black: points that flow to the slow region.
Left: global negative curvature normal to the stable manifold. Right: positive curvature normal to the stable manifold - randomly initialized gradient descent is more likely to encounter the slow region. The main result of this work is a convergence rate for randomly initialized gradient descent for complete orthogonal dictionary learning to the neighborhood of a global minimum of the objective. Our results are probabilistic since they rely on initialization in certain regions of the parameter space, yet they allow one to flexibly trade off between the maximal number of iterations in the bound and the probability of the bound holding. While our focus is on dictionary learning, it has been recently shown that for other important nonconvex problems such as phase retrieval BID7, performance guarantees for randomly initialized gradient descent can be obtained as well. In fact, in Appendix C we show that negative curvature normal to the stable manifolds of saddle points (illustrated in FIG0) is also a feature of the population objective of generalized phase retrieval, and can be used to obtain an efficient convergence rate. Easy nonconvex problems. There are two basic impediments to solving nonconvex problems globally: (i) spurious local minimizers, and (ii) flat saddle points, which can cause methods to stagnate in the vicinity of critical points that are not minimizers. The latter difficulty has motivated the study of strict saddle functions (36; BID13), which have the property that at every point in the domain of optimization, there is a large gradient, a direction of strict negative curvature, or the function is strongly convex. By leveraging this curvature information, it is possible to escape saddle points and obtain a local minimizer in polynomial time. Perhaps more surprisingly, many known strict saddle functions also have the property that every local minimizer is global; for these problems, this implies that efficient methods find global solutions. Examples of problems with this property include variants of sparse dictionary learning, phase retrieval, tensor decomposition BID13, community detection and phase synchronization BID4. Minimizing strict saddle functions. Strict saddle functions have the property that at every saddle point there is a direction of strict negative curvature. A natural approach to escaping such saddle points is to use second-order methods (e.g., trust region BID8 or curvilinear search BID14) that explicitly leverage curvature information. Alternatively, one can attempt to escape saddle points using first-order information only. However, some care is needed: canonical first-order methods such as gradient descent will not obtain minimizers if initialized at a saddle point (or at a point that flows to one) - at any critical point, gradient descent simply stops. A natural remedy is to randomly perturb the iterate whenever needed. A line of recent works shows that noisy gradient methods of this form efficiently optimize strict saddle functions (24; 12; 20). For example, one of these works obtains rates on strict saddle functions that match the optimal rates for smooth convex programs up to a polylogarithmic dependence on dimension. Randomly initialized gradient descent? The aforementioned results are broad, and nearly optimal. Nevertheless, important questions about the behavior of first-order methods for nonconvex optimization remain unanswered. For example: in every one of the aforementioned benign nonconvex optimization problems, randomly initialized gradient descent rapidly obtains a minimizer.
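As a concrete illustration of the perturbation strategy discussed above, the following is a minimal sketch of a perturbed gradient scheme of the kind analyzed in the noisy-gradient literature cited here. It is not the algorithm studied in this paper, which uses plain randomly initialized gradient descent without perturbations, and the threshold and noise scale are illustrative placeholders.

import numpy as np

def perturbed_gradient_descent(grad, x0, step=0.01, n_iter=1000, g_thresh=1e-3, noise=1e-2):
    # Ordinary gradient steps, plus a small random perturbation whenever the
    # gradient is small, so the iterate does not stagnate near a saddle point.
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        g = grad(x)
        if np.linalg.norm(g) < g_thresh:
            x = x + noise * np.random.randn(*x.shape)
        else:
            x = x - step * g
    return x

The question raised above is whether such explicit perturbations are even necessary on benign problems, or whether the randomness of the initialization alone already suffices.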
This may seem unsurprising: general considerations indicate that the stable manifolds associated with non-minimizing critical points have measure zero, this implies that a variety of small-stepping first order methods converge to minimizers in the large-time limit. However, it is not difficult to construct strict saddle problems that are not amenable to efficient optimization by randomly initialized gradient descent -see BID11 for an example. This contrast between the excellent empirical performance of randomly initialized first order methods and worst case examples suggests that there are important geometric and/or topological properties of "easy nonconvex problems" that are not captured by the strict saddle hypothesis. Hence, the motivation of this paper is twofold: (i) to provide theoretical corroboration (in certain specific situations) for what is arguably the simplest, most natural, and most widely used first order method, and (ii) to contribute to the ongoing effort to identify conditions which make nonconvex problems amenable to efficient optimization. Suppose we are given data matrix Y = y 1,... y p ∈ R n×p. The dictionary learning problem asks us to find a concise representation of the data BID12, of the form Y ≈ AX, where X is a sparse matrix. In the complete, orthogonal dictionary learning problem, we restrict the matrix A to have orthonormal columns (A ∈ O(n)). This variation of dictionary learning is useful for finding concise representations of small datasets (e.g., patches from a single image, in MRI FORMULA2).To analyze the behavior of dictionary learning algorithms theoretically, it useful to posit that Y = A 0 X 0 for some true dictionary A 0 ∈ O(n) and sparse coefficient matrix X 0 ∈ R n×p, and ask whether a given algorithm recovers the pair (A 0, X 0). BID3 In this work, we further assume that the sparse matrix X 0 is random, with entries i.i.d. Bernoulli-Gaussian 5. For simplicity, we will let A 0 = I; our arguments extend directly to general A 0 via the simple change of variables q → A * 0 q. showed that under mild conditions, the complete dictionary recovery problem can be reduced to the geometric problem of finding a sparse vector in a linear subspace. Notice that because A 0 is orthogonal, row(Y) = row(X 0). Because X 0 is a sparse random matrix, the rows of X 0 are sparse vectors. Under mild conditions, they are the sparsest vectors in the row space of Y, and hence can be recovered by solving the conceptual optimization problem min q DISPLAYFORM0 This is not a well-structured optimization problem: the objective is discontinuous, and the constraint set is open. A natural remedy is to replace the 0 norm with a smooth sparsity surrogate, and to break the scale ambiguity by constraining q to the sphere, giving DISPLAYFORM1 Here, we choose h µ (t) = µ log(cosh(t/µ)) as a smooth sparsity surrogate. This objective was analyzed in, which showed that (i) although this optimization problem is nonconvex, when the data are sufficiently large, with high probability every local optimizer is near a signed column of the true dictionary A 0, (ii) every other critical point has a direction of strict negative curvature, and (iii) as a consequence, a second-order Riemannian trust region method efficiently recovers a column of A 0. BID5 The Riemannian trust region method is of mostly theoretical interest: it solves complicated (albeit polynomial time) subproblems that involve the Hessian of f DL. Note the similarity to the dictionary learning objective. 
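To make the smoothed formulation concrete, the following numpy sketch generates Y = A_0 X_0 with A_0 = I and Bernoulli-Gaussian X_0 and evaluates the surrogate objective. Since the displayed equation above is garbled, the objective is written here in the assumed form f_DL(q) = (1/p) sum_j h_mu(q^T y_j) over the sphere, i.e., the surrogate h_mu applied to the correlations of q with the data columns and averaged; the normalization by p and all problem sizes are illustrative choices rather than values taken from the text.

```python
import numpy as np

def bernoulli_gaussian(n, p, theta, rng):
    """Sparse coefficient matrix X0 with i.i.d. Bernoulli(theta)-Gaussian entries."""
    return rng.binomial(1, theta, size=(n, p)) * rng.standard_normal((n, p))

def h_mu(t, mu):
    """Smooth sparsity surrogate h_mu(t) = mu * log(cosh(t / mu))."""
    return mu * np.log(np.cosh(t / mu))

def f_dl(q, Y, mu):
    """Assumed smoothed objective: average of h_mu over the correlations q^T y_j."""
    return np.mean(h_mu(Y.T @ q, mu))

def euclidean_grad_f_dl(q, Y, mu):
    """Euclidean gradient of f_dl: (1/p) * Y tanh(Y^T q / mu)."""
    return Y @ np.tanh(Y.T @ q / mu) / Y.shape[1]

rng = np.random.default_rng(0)
n, p, theta, mu = 20, 5000, 0.3, 0.05
X0 = bernoulli_gaussian(n, p, theta, rng)
Y = X0                                   # A0 = I, so Y = A0 X0 = X0
q = rng.standard_normal(n); q /= np.linalg.norm(q)
print(f_dl(q, Y, mu))
```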
Right: The objective for complete orthogonal dictionary learning (discussed in section 6) for n = 3.In practice, simple iterative methods, including randomly initialized gradient descent are also observed to rapidly obtain high-quality solutions. In the sequel, we will give a geometric explanation for this phenomenon, and bound the rate of convergence of randomly initialized gradient descent to the neighborhood of a column of A 0. Our analysis of f DL is probabilistic in nature: it argues that with high probability in the sparse matrix X 0, randomly initialized gradient descent rapidly produces a minimizer. To isolate more clearly the key intuitions behind this analysis, we first analyze the simpler separable objective DISPLAYFORM2 Figure 2 plots both f Sep and f DL as functions over the sphere. Notice that many of the key geometric features in f DL are present in f Sep; indeed, f Sep can be seen as an "ultrasparse" version of f DL in which the columns of the true sparse matrix X 0 are taken to have only one nonzero entry. A virtue of this model function is that its critical points and their stable manifolds have simple closed form expressions (see Lemma 1). Our problems of interest have the form DISPLAYFORM0 where f: R n → R is a smooth function. We let ∇f (q) and ∇ 2 f (q) denote the Euclidean gradient and hessian (over R n), and let grad [f] (q) and Hess [f] (q) denote their Riemannian counterparts (over S n−1). We will obtain for Riemannian gradient descent defined by the update DISPLAYFORM1 for some step size η > 0, where exp q: T q S n−1 → S n−1 is the exponential map. The Riemannian gradient on the sphere is given by grad[f] (q) = (I − qq *)∇f (q).We let A denote the set of critical points of f over S n−1 -these are the pointsq s.t. grad [f] (q) = 0. We letȂ denote the set of local minimizers, and "A its complement. Both f Sep and f DL are Morse functions on S n−1, 7 we can assign an index α to everyq ∈ A, which is the number of negative eigenvalues of Hess [f] (q).Our goal is to understand when gradient descent efficiently converges to a local minimizer. In the small-step limit, gradient descent follows gradient flow lines γ: R → M, which are solution curves of the ordinary differential equatioṅ DISPLAYFORM2 To each critical point α ∈ A of index λ, there is an associated stable manifold of dimension dim(M) − λ, which is roughly speaking, the set of points that flow to α under gradient flow: DISPLAYFORM3 7 Strictly speaking, fDL is Morse with high probability, due to of. Negative curvature and efficient gradient descent. The union of the light blue, orange and yellow sets is the set C. In the light blue region, there is negative curvature normal to ∂C, while in the orange region the gradient norm is large, as illustrated by the arrows. There is a single global minimizer in the yellow region. For the separable objective, the stable manifolds of the saddles and maximizers all lie on ∂C (the black circles denote the critical points, which are either maximizers " ", saddles " ", or minimizers " "). The red dots denote ∂C ζ with ζ = 0.2.Our analysis uses the following convenient coordinate chart DISPLAYFORM0 where w ∈ B 1. We also define two useful sets: DISPLAYFORM1 Since the problems considered here are symmetric with respect to a signed permutation of the coordinates we can consider a certain C and the will hold for the other symmetric sections as well. 
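The Riemannian gradient and the exponential-map update just described translate directly into a few lines of code. The sketch below (numpy) projects a Euclidean gradient onto the tangent space of the sphere and retracts along a great circle; the step size and iteration budget are illustrative, and any smooth f on R^n can be supplied through egrad_fn.

```python
import numpy as np

def riemannian_grad(q, egrad):
    """grad[f](q) = (I - q q^T) * (Euclidean gradient of f at q)."""
    return egrad - (q @ egrad) * q

def sphere_exp(q, v):
    """Exponential map on the sphere: exp_q(v) = cos(||v||) q + sin(||v||) v / ||v||."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return q
    return np.cos(nv) * q + np.sin(nv) * (v / nv)

def riemannian_gd(q0, egrad_fn, eta=0.05, iters=500):
    """Riemannian gradient descent: q_{k+1} = exp_{q_k}(-eta * grad[f](q_k))."""
    q = q0.copy()
    for _ in range(iters):
        g = riemannian_grad(q, egrad_fn(q))
        q = sphere_exp(q, -eta * g)
    return q

# example: plug in the dictionary learning gradient from the previous sketch
# q_star = riemannian_gd(q, lambda v: euclidean_grad_f_dl(v, Y, mu))
```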
We will show that at every point in C aside from a neighborhood of a global minimizer for the separable objective (or a solution to the dictionary problem that may only be a local minimizer), there is either a large gradient component in the direction of the minimizer or negative curvature in a direction normal to ∂C. For the case of the separable objective, one can show that the stable manifolds of the saddles lie on this boundary, and hence this curvature is normal to the stable manifolds of the saddles and allows rapid progress away from small gradient regions and towards a global minimizer BID7. These regions are depicted in Figure 3.In the sequel, we will make the above ideas precise for the two specific nonconvex optimization problems discussed in Section 3 and use this to obtain a convergence rate to a neighborhood of a global minimizer. Our analysis are specific to these problems. However, as we will describe in more detail later, they hinge on important geometric characteristics of these problems which make them amenable to efficient optimization, which may obtain in much broader classes of problems. In this section, we study the behavior of randomly initialized gradient descent on the separable function f Sep. We begin by characterizing the critical points:Lemma 1 (Critical points of f Sep). The critical points of the separable problem are DISPLAYFORM0 For every α ∈ A and corresponding a(α), for µ < c √ n log n the stable manifold of α takes the form DISPLAYFORM1 where c > 0 is a numerical constant. Proof. Please see Appendix ABy inspecting the dimension of the stable manifolds, it is easy to verify that that there are 2n global minimizers at the 1-sparse vectors on the sphere ± e i, 2 n maximizers at the least sparse vectors and an exponential number of saddle points of intermediate sparsity. This is because the dimension of W s (α) is simply the dimension of b in 6, and it follows directly from the stable manifold theorem that only minimizers will have a stable manifold of dimension n − 1. The objective thus possesses no spurious local minimizers. When referring to critical points and stable manifolds from now on we refer only to those that are contained in C or on its boundary. It is evident from Lemma 1 that the critical points in "A all lie on ∂C and that α∈ " A W s (α) = ∂C, and there is a minimizer at its center given by q = e n. We now turn to making precise the notion that negative curvature normal to stable manifolds of saddle points enables gradient descent to rapidly exit small gradient regions. We do this by defining vector fields u (i) (q), i ∈ [n − 1] such that each field is normal to a continuous piece of ∂C ζ and points outwards relative to C ζ defined in 4. By showing that the Riemannian gradient projected in this direction is positive and proportional to ζ, we are then able to show that gradient descent acts to increase ζ(q(w)) = qn w ∞ − 1 geometrically. This corresponds to the behavior illustrated in the light blue region in Figure 3. DISPLAYFORM0 DISPLAYFORM1 where c > 0 is a numerical constant. Proof. Please see Appendix A.Since we will use this property of the gradient in C ζ to derive a convergence rate, we will be interested in bounding the probability that gradient descent initialized randomly with respect to a uniform measure on the sphere is initialized in C ζ. This will require bounding the volume of this set, which is done in the following lemma: DISPLAYFORM2 Proof. Please see Appendix D.3. 
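Before stating the resulting rate, it is instructive to observe this dispersive mechanism numerically. Taking the separable objective in the assumed form f_Sep(q) = sum_i h_mu(q_i) (our reading of the garbled display, consistent with its stated critical-point structure), the sketch below runs the Riemannian descent loop from the previous sketch and tracks zeta(q(w)) = q_n / ||w||_inf - 1, which grows roughly geometrically until the iterate approaches a signed basis vector; all constants are illustrative.

```python
import numpy as np

def egrad_f_sep(q, mu):
    """Euclidean gradient of f_Sep(q) = sum_i h_mu(q_i): elementwise tanh(q / mu)."""
    return np.tanh(q / mu)

rng = np.random.default_rng(1)
n, mu, eta = 20, 0.05, 0.05
q = rng.standard_normal(n); q /= np.linalg.norm(q)

for t in range(400):
    g = riemannian_grad(q, egrad_f_sep(q, mu))   # helpers from the sketch above
    q = sphere_exp(q, -eta * g)
    i_max = np.argmax(np.abs(q))                 # coordinate playing the role of q_n
    w = np.delete(np.abs(q), i_max)
    zeta = np.abs(q[i_max]) / w.max() - 1.0
    if t % 100 == 0:
        print(t, zeta)

print("limit point is close to a signed basis vector:", np.round(q, 2))
```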
Using the above, one can obtain the following convergence rate:Theorem 1 (Gradient descent convergence rate for separable function). For any 0 < ζ 0 < 1, r > µ log on the separable objective with µ < c2 √ n log n, enters an L ∞ ball of radius r around a global minimizer in T < C η √ n r 2 + log 1 ζ 0 iterations with probability DISPLAYFORM0 Proof. Please see Appendix A.We have thus obtained a convergence rate for gradient descent that relies on the negative curvature around the stable manifolds of the saddles to rapidly move from these regions of the space towards the vicinity of a global minimizer. This is evinced by the logarithmic dependence of the rate on ζ. As was shown for orthogonal dictionary learning in, we also expect a linear convergence rate due to strong convexity in the neighborhood of a minimizer, but do not take this into account in the current analysis. The proofs in this section will be along the same lines as those of Section 5. While we will not describe the positions of the critical points explicitly, the similarity between this objective and the separable function motivates a similar argument. It will be shown that initialization in some C ζ will guarantee that Riemannian gradient descent makes uniform progress in function value until reaching the neighborhood of a global minimizer. We will first consider the population objective which corresponds to the infinite data limit DISPLAYFORM0 and then bounding the finite sample size fluctuations of the relevant quantities. We begin with a lemma analogous to Lemma 2:Lemma 4 (Dictionary learning population gradient). For w ∈ C ζ, r < |w i |, µ < c 1 r DISPLAYFORM1 √ ζ the dictionary learning population objective 8 obeys DISPLAYFORM2 where c θ depends only on θ, c 1 is a positive numerical constant and u (i) is defined in 7.Proof. Please see Appendix BUsing this , we obtain the desired convergence rate for the population objective, presented in Lemma 11 in Appendix B. After accounting for finite sample size fluctuations in the gradient, one obtains a rate of convergence to the neighborhood of a solution (which is some signed basis vector due to our choice A 0 = I) Theorem 2 (Gradient descent convergence rate for dictionary learning). DISPLAYFORM3, Riemannian gradient descent with step size η < c5θs n log np on the dictionary learning objective 1 with µ < c6 √ ζ0 DISPLAYFORM4, enters a ball of radius c 3 s from a target solution in DISPLAYFORM5 iterations with probability DISPLAYFORM6 where y = DISPLAYFORM7, P y is given in Lemma 10 and c i, C i are positive constants. Proof. Please see Appendix BThe two terms in the rate correspond to an initial geometric increase in the distance from the set containing the small gradient regions around saddle points, followed by convergence to the vicinity of a minimizer in a region where the gradient norm is large. The latter is based on on the geometry of this objective provided in. The above analysis suggests that second-order properties -namely negative curvature normal to the stable manifolds of saddle points -play an important role in the success of randomly initialized gradient descent in the solution of complete orthogonal dictionary learning. This was done by furnishing a convergence rate guarantee that holds when the random initialization is not in regions that feed into small gradient regions around saddle points, and bounding the probability of such an initialization. 
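Putting the pieces together, a minimal end-to-end check of the behavior described by Theorem 2 is sketched below: randomly initialized Riemannian gradient descent on the (assumed) smoothed dictionary learning objective with A_0 = I typically lands near a signed standard basis vector, i.e., near a column of the true dictionary. The dimensions, sparsity level theta, smoothing mu, and step size are illustrative and are not the constants appearing in the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, theta, mu, eta = 20, 20000, 0.3, 0.05, 0.1

X0 = rng.binomial(1, theta, (n, p)) * rng.standard_normal((n, p))
Y = X0                                             # A0 = I

def egrad(q):
    return Y @ np.tanh(Y.T @ q / mu) / p           # Euclidean gradient of the assumed f_DL

q = rng.standard_normal(n); q /= np.linalg.norm(q)
for _ in range(2000):
    g = egrad(q)
    g = g - (q @ g) * q                            # Riemannian gradient
    q = sphere_exp(q, -eta * g)                    # exponential map from the earlier sketch

i = np.argmax(np.abs(q))
print("closest signed basis vector: e_%d, |q_i| = %.3f" % (i, abs(q[i])))
```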
In Appendix C we provide an additional example of a nonconvex problem that for which an efficient rate can be obtained based on an analysis that relies on negative curvature normal to stable manifolds of saddles -generalized phase retrieval. An interesting direction of further work is to more precisely characterize the class of functions that share this feature. The effect of curvature can be seen in the dependence of the maximal number of iterations T on the parameter ζ 0. This parameter controlled the volume of regions where initialization would lead to slow progress and the failure probability of the bound 1 − P was linear in ζ 0, while T depended logarithmically on ζ 0. This logarithmic dependence is due to a geometric increase in the distance from the stable manifolds of the saddles during gradient descent, which is a consequence of negative curvature. Note that the choice of ζ 0 allows one to flexibly trade off between T and 1 − P. By decreasing ζ 0, the bound holds with higher probability, at the price of an increase in T. This is because the volume of acceptable initializations now contains regions of smaller minimal gradient norm. In a sense, the is an extrapolation of works such as that analyze the ζ 0 = 0 case to finite ζ 0.Our analysis uses precise knowledge of the location of the stable manifolds of saddle points. For less symmetric problems, including variants of sparse blind deconvolution and overcomplete tensor decomposition, there is no closed form expression for the stable manifolds. However, it is still possible to coarsely localize them in regions containing negative curvature. Understanding the implications of this geometric structure for randomly initialized first-order methods is an important direction for future work. One may hope that studying simple model problems and identifying structures (here, negative curvature orthogonal to the stable manifold) that enable efficient optimization will inspire approaches to broader classes of problems. One problem of obvious interest is the training of deep neural networks for classification, which shares certain high-level features with the problems discussed in this paper. The objective is also highly nonconvex and is conjectured to contain a proliferation of saddle points BID10, yet these appear to be avoided by first-order methods BID15 for reasons that are still quite poorly understood beyond the two-layer case. Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. DISPLAYFORM0. Thus critical points are ones where either tanh(q µ) = 0 (which cannot happen on S n−1) or tanh(q µ) is in the nullspace of (I − qq *), which implies tanh(q µ) = cq for some constant b. The equation tanh(x µ) = bx has either a single solution at the origin or 3 solutions at {0, ±r(b)} for some r(b). Since this equation must be solves simultaneously for every element of q, we obtain ∀i ∈ [n]: q i ∈ {0, ±r(b)}. To obtain solutions on the sphere, one then uses the freedom we have in choosing b (and thus r(b)) such that q = 1. The ing set of critical points is thus DISPLAYFORM1 To prove the form of the stable manifolds, we first show that for q i such that |q i | = q ∞ and any q j such that |q j | + ∆ = |q i | and sufficiently small ∆ > 0, we have DISPLAYFORM2 For ease of notation we now assume q i, q j > 0 and hence ∆ = q i − q j, otherwise the argument can be repeated exactly with absolute values instead. 
The above inequality can then be written as DISPLAYFORM3 If we now define DISPLAYFORM4 where the O(∆ 2) term is bounded. Defining a vector r ∈ R n by DISPLAYFORM5 we have r 2 = 1. Since tanh(x) is concave for x > 0, and |r i | ≤ 1, we find DISPLAYFORM6 From DISPLAYFORM7 and thus q j ≥ 1 √ n − ∆. Using this inequality and properties of the hyperbolic secant we obtain DISPLAYFORM8 and plugging in µ = c √ n log n for some c < 1 DISPLAYFORM9 log n + log log n + log 4).We can bound this quantity by a constant, say h 2 ≤ 1 2, by requiring DISPLAYFORM10 ) log n + log log n ≤ − log 8and for and c < 1, using − log n + log log n < 0 we have DISPLAYFORM11 Since ∆ can be taken arbitrarily small, it is clear that c can be chosen in an n-independent manner such that A ≤ − log 8. We then find DISPLAYFORM12 since this inequality is strict, ∆ can be chosen small enough such that O(∆ 2) < ∆(h 1 − h 2) and hence h > 0, proving 9.It follows that under negative gradient flow, a point with |q j | < ||q|| ∞ cannot flow to a point q such that |q j | = ||q || ∞. From the form of the critical points, for every such j, q must thus flow to a point such that q j = 0 (the value of the j coordinate cannot pass through 0 to a point where |q j | = ||q || ∞ since from smoothness of the objective this would require passing some q with q j = 0, at which point grad [f Sep] (q) j = 0).As for the maximal magnitude coordinates, if there is more than one coordinate satisfying |q i1 | = |q i2 | = q ∞, it is clear from symmetry that at any subsequent point q along the gradient flow line q i1 = q i2. These coordinates cannot change sign since from the smoothness of the objective this would require that they pass through a point where they have magnitude smaller than 1/ √ n, at which point some other coordinate must have a larger magnitude (in order not to violate the spherical constraint), contradicting the above for non-maximal elements. It follows that the sign pattern of these elements is preserved during the flow. Thus there is a single critical point to which any q can flow, and this is given by setting all the coordinates with |q j | < q ∞ to 0 and multiplying the remaining coordinates by a positive constant to ensure the ing vector is on S n. Denoting this critical point by α, there is a vector b such that q = P S n−1 [a(α) + b] and supp(a(α)) ∩ supp(b) = ∅, b ∞ < 1 with the form of a(α) given by 5. The collection of all such points defines the stable manifold of α. Proof of Lemma 2: (Separable objective gradient projection). i) We consider the sign(w i) = 1 case; the sign(w i) = −1 case follows directly. Recalling that DISPLAYFORM13 qn, we first prove DISPLAYFORM14 for some c > 0 whose form will be determined later. The inequality clearly holds for w i = q n.To DISPLAYFORM15 verify that it holds for smaller values of w i as well, we now show that ∂ ∂w i tanh w i µ − tanh q n µ w i q n − c(q n − w i) < 0 which will ensure that it holds for all w i. We define s 2 = 1 − ||w|| 2 + w 2 i and denote q n = s 2 − w 2 i to extract the w i dependence, givingWhere in the last inequality we used properties of the sech function and q n ≥ w i. We thus want to show DISPLAYFORM16 and it follows that 10 holds. For µ < 1 BID15 we are guaranteed that c > 0.From examining the RHS of 10 (and plugging in q n = s 2 − w 2 i) we see that any lower bound on the gradient of an element w j applies also to any element |w i | ≤ |w j |. 
Since for |w j | = ||w|| ∞ we have q n − w j = w j ζ, for every log(1 µ)µ ≤ w i we obtain the bound DISPLAYFORM17 Proof of Theorem 1: (Gradient descent convergence rate for separable function).We obtain a convergence rate by first bounding the number of iterations of Riemannian gradient descent in C ζ0 \C 1, and then considering DISPLAYFORM18. Choosing c 2 so that µ < 1 2, we can apply Lemma 2, and for u defined in 7, we thus have DISPLAYFORM19 Since from Lemma 7 the Riemannian gradient norm is bounded by √ n, we can choose c 1, c 2 such that µ log(DISPLAYFORM20 . This choice of η then satisfies the conditions of Lemma 17 with r = µ log( DISPLAYFORM21, M = √ n, which gives that after a gradient step DISPLAYFORM22 for some suitably chosenc > 0. If we now define by w (t) the t-th iterate of Riemannian gradient descent and DISPLAYFORM23 and the number of iterations required to exit C ζ0 \C 1 is DISPLAYFORM24 To bound the remaining iterations, we use Lemma 2 to obtain that for every w ∈ C ζ0 \B ∞ r, DISPLAYFORM25 where we have used ||u DISPLAYFORM26 We thus have DISPLAYFORM27 Choosing DISPLAYFORM28 where L is the gradient Lipschitz constant of f s, from Lemma 5 we obtain DISPLAYFORM29 According to Lemma B, L = 1/µ and thus the above holds if we demand η < µ 2. Combining 12 and 13 gives DISPLAYFORM30.To obtain the final rate, we use in g(w 0) − g * ≤ √ n andcη < 1 ⇒ 1 log(1+cη) <C cη for somẽ C > 0. Thus one can choose C > 0 such that DISPLAYFORM31 From Lemma 1 the ball B ∞ r contains a global minimizer of the objective, located at the origin. The probability of initializing in Ȃ C ζ0 is simply given from Lemma 3 and by summing over the 2n possible choices of C ζ0, one for each global minimizer (corresponding to a single signed basis vector)., where L is a lipschitz constant for ∇f (q), one has DISPLAYFORM0 Proof. Just as in the euclidean setting, we can obtain a lower bound on progress in function values of iterates of the Riemannian gradient descent algorithm from a lower bound on the Riemannian gradient. Consider f: S n−1 → R, which has L-lipschitz gradient. Let q k denote the current iterate of Riemannian gradient descent, and let t k > 0 denote the step size. Then we can form the Taylor approximation to f • Exp q k (v) at 0 q k: DISPLAYFORM1 where the matrix norm is the operator norm on R n×n. Using the gradient-lipschitz property of f, we readily compute DISPLAYFORM0 since ∇f = 0 and q k ∈ S n−1. We thus have DISPLAYFORM1 If we put v = −t k grad[f](q k) and write q k+1 = Exp q k (−t k grad [f] (q k)), the previous expression becomes DISPLAYFORM2. Thus progress in objective value is guaranteed by lower-bounding the Riemannian gradient. As in the euclidean setting, summing the previous expression over iterations k now yields DISPLAYFORM3 Plugging in a constant step size gives the desired . Lemma 6 (Lipschitz constant of ∇f). For any x 1, x 2 ∈ R n, it holds DISPLAYFORM4 Proof. It will be enough to study a single coordinate function of ∇f. Using a derivative given in section D.1, we have for DISPLAYFORM5 A bound on the magnitude of the derivative of this smooth function implies a lipschitz constant for x → tanh(x/µ). To find the bound, we differentiate again and find the critical points of the function. We have, using the chain rule, d dx DISPLAYFORM6 (e x/µ + e −x/µ) 3. The denominator of this final expression vanishes nowhere. Hence, the only critical point satisfies x/µ = −x/µ, which implies x = 0. Therefore it holds DISPLAYFORM7 which shows that tanh(x/µ) is (1/µ)-lipschitz. 
Now let x 1 and x 2 be any two points of R n. Then one has DISPLAYFORM8 completing the proof. Proof of Lemma 4:(Dictionary learning population gradient). For simplicity we consider the case sign(w i) = 1. The converse follows by a similar argument. We have DISPLAYFORM0 Following the notation of, we write x j = b j v j where b j ∼ Bern(θ), v j ∼ N and denote the vectors of these variables by J, v respectively. Defining DISPLAYFORM1 and similarly the second term in 15 is, with DISPLAYFORM0 We already have a lower bound in Lemma 20 of that we can use for the second term, so we need an upper bound for the first term. Following from p. 865, we define DISPLAYFORM0, and defining DISPLAYFORM1 Where b k = (−β) k (k + 1). Using B.3 from Lemma 40 in we have DISPLAYFORM2 Where Φ c (x) is the complementary Gaussian CDF (The exchange of summation and expectation is justified since Y > 0 implies Z ∈, see proof of Lemma 18 in for details). Using the following bounds DISPLAYFORM3 2 /2 by applying the upper (lower) bound to the even (odd) terms in the sum, and then adding a non-negative quantity, we obtain DISPLAYFORM4 and using Lemma 17 in ) and taking T → ∞ so that β → 1 we have DISPLAYFORM5 DISPLAYFORM6 giving the upper bound DISPLAYFORM7 while the lower bound (Lemma 20 in) is DISPLAYFORM8 After conditioning on J \{n, i} the variables X + q n v n, X + q i v i are Gaussian. We can thus plug the bounds into 16 to obtain DISPLAYFORM0 the term in the expectation is positive since q n > ||w|| ∞ (1 + ζ) > w i giving DISPLAYFORM1 To extract the ζ dependence we plug in q n > w i (1 + ζ) and develop to first order in ζ (since the ing function of ζ is convex) giving DISPLAYFORM2 Given some ζ and r such that w i > r, if we now choose µ such that µ < Lemma 8 (Point-wise concentration of projected gradient). For u (i) defined in 7, the gradient of the objective 1 obeys DISPLAYFORM3 Proof of Lemma 8: (Point-wise concentration of projected gradient). If we denote by x i a column of the data matrix with entries x i j ∼ BG(θ), we have DISPLAYFORM4. Since tanh(x) is bounded by 1, DISPLAYFORM5 Invoking Lemma 21 from and u 2 = 1 + DISPLAYFORM6 and using Lemma 36 in with R = √ 2, σ = √ 2 we have DISPLAYFORM7 Lemma 9 (Projection Lipschitz Constant). The Lipschitz constant for DISPLAYFORM8 Proof of Lemma 9: (Projection Lipschitz Constant). We have DISPLAYFORM9 where we have defined DISPLAYFORM10 We also use the fact that tanh is bounded by 1 and s(w) is bounded by X ∞. We can then use Lemma 23 in to obtain DISPLAYFORM11 Lemma 10 (Uniformized gradient fluctuations). For all w ∈ C ζ, i ∈ [n], with probability P > P y we have DISPLAYFORM12 where DISPLAYFORM13 Proof of Lemma 10:(Uniformized gradient fluctuations). For X ∈ R n×p with i.i.d. BG(θ) entries, we define the event E ∞ ≡ {1 ≤ X ∞ ≤ 4 log(np)}. We have DISPLAYFORM14 For any ε ∈ we can construct an ε-net N for C ζ \B 2 1/20 DISPLAYFORM15. If we choose ε = y(θ,ζ) DISPLAYFORM16 We then denote by E g the event DISPLAYFORM17 2 in the of Lemma 8 gives that for all w ∈ C ζ, i ∈ [n], DISPLAYFORM18 py(θ, ζ) Proof of Lemma 11: (Gradient descent convergence rate for dictionary learning -population). The rate will be obtained by splitting C ζ0 into three regions. We consider convergence to B 2 s since this set contains a global minimizer. Note that the balls in the proof are defined with respect to w. DISPLAYFORM19 The analysis in this region is completely analogous to that in the first part of the proof of Lemma 1. For every point in this set we have DISPLAYFORM20. 
From Lemma 16 we know that, since for every point in this region r 3 ζ < 1, we have DISPLAYFORM21 DISPLAYFORM22 r = z(r, ζ) and we thus demand µ < √ ζ0 DISPLAYFORM23 and obtain from Lemma 4 that for |w i | > r DISPLAYFORM24. We now require η <, M = √ θn (since the maximal norm of the Riemannian gradient is √ θn from Lemma 12), obtaining that at every iteration in this region ζ ≥ ζ 1 + √ nc DL 2(8000(n − 1)) 3/2 η and the maximal number of iterations required to obtain ζ > 8 and exit this region is given by DISPLAYFORM25 According to Proposition 7 in, which we can apply since s ≥ DISPLAYFORM26. Defining h(q) = w 2 2, and denoting by q an update of Riemannian gradient descent with step size η, we have (using a Lagrange remainder term) DISPLAYFORM27 where in the last line we used q = cos(gη)q − sin(gη) DISPLAYFORM28 we obtain (using 18) DISPLAYFORM29 and thus choosing η < DISPLAYFORM30 we find DISPLAYFORM31 Under review as a conference paper at ICLR 2019 and in our region of interest w 2 < w 2 −csθη for somec > 0 and thus summing over iterations, we obtain for someC 2 > 0 DISPLAYFORM32 From Lemma 12, M = √ θn and thus with a suitably chosen c 2 > 0, η < c2s n satisfies the above requirement on η as well as the previous requirements, since θ < 1. Combining these gives, we find that when initializing in C ζ0, the maximal number of iterations required for Riemannian gradient descent to enter B 2 s is DISPLAYFORM0 for some suitably chosen C 1, where t 1, t 2 are given in 17,19. The probability of such an initialization is given by the probability of initializing in one of the 2n possible choices of C ζ, which is bounded in Lemma 3.Once w ∈ B 2 s, the distance in R n−1 between w and a solution to the problem (which is a signed basis vector, given by the point w = 0 or an analog on a different symmetric section of the sphere) is no larger than s, which in turn implies that the Riemannian distance between ϕ(w) and a solution is no larger than c 3 s for some c 3 > 0. We note that the conditions on µ can be satisfied by requiring µ < DISPLAYFORM1 where X is the data matrix with i.i.d. BG(θ) entries. Proof. Denoting x ≡ (x, x n) we have DISPLAYFORM2 and using Jensen's inequality, convexity of the L 2 norm and the triangle inequality to obtain DISPLAYFORM3 Similarly, in the finite sample size case one obtains DISPLAYFORM4 Proof of Theorem 2: (Gradient descent convergence rate for dictionary learning). The proof will follow exactly that of Lemma 11, with the finite sample size fluctuations decreasing the guaranteed change in ζ or ||w|| at every iteration (for the initial and final stages respectively) which will adversely affect the bounds. DISPLAYFORM5 To control the fluctuations in the gradient projection, we choose DISPLAYFORM6 which can be satisfied by choosing y(θ, ζ 0) = DISPLAYFORM7 for an appropriate c 7 > 0. According to Lemma 10, with probability greater than P y we then have DISPLAYFORM8 With the same condition on µ as in Lemma 11, combined with the uniformized bound on finite sample fluctuations, we have that at every point in this set DISPLAYFORM9. According to Lemma 12 the Riemannian gradient norm is bounded by M = √ n X ∞. Choosing r, b as in Lemma 11, we require η < for some chosenc > 0. We then obtain DISPLAYFORM10 for a suitably chosen C 2 > 0. The final bound on the rate is obtained by summing over the terms for the three regions as in the population case, and convergence is again to a distance of less than c 3 s from a local minimizer. 
The probability of achieving this rate is obtained by taking a union bound over the probability of initialization in C ζ0 (given in Lemma 3) and the probabilities of the bounds on the gradient fluctuations holding (from Lemma 10 and FORMULA7). Note that the fluctuation bound events imply by construction the event E ∞ = {1 ≤ X ∞ ≤ 4 log(np)} hence we can replace X ∞ in the conditions on η above by 4 log(np). The conditions on η, µ can be satisfied by requiring η < c5θs n log np, µ < c6 √ ζ0 n 5/4 for suitably chosen c 5, c 6 > 0. The bound on the number of iterations can be simplified to the form in the theorem statement as in the population case. We show below that negative curvature normal to stable manifolds of saddle points in strict saddle functions is a feature that is found not only in dictionary learning, and can be used to obtain efficient convergence rates for other nonconvex problems as well, by presenting an analysis of generalized phase retrieval that is along similar lines to the dictionary learning analysis. We stress that this contribution is not novel since a more thorough analysis was carried out by BID7. The ing rates are also suboptimal, and pertain only to the population objective. Generalized phase retrieval is the problem of recovering a vector x ∈ C n given a set of magnitudes of projections y k = |x * a k | onto a known set of vectors a k ∈ C n. It arises in numerous domains including microscopy, acoustics BID1, and quantum mechanics (see for a review). Clearly x can only be recovered up to a global phase. We consider the setting where the elements of every a k are i. DISPLAYFORM0 We analyze the least squares formulation of the problem given by DISPLAYFORM1 Taking the expectation (large p limit) of the above objective and organizing its derivatives using Wirtinger calculus FORMULA2, we obtain DISPLAYFORM2 For the remainder of this section, we analyze this objective, leaving the consideration of finite sample size effects to future work. In it was shown that aside from the manifold of minimȃ DISPLAYFORM0 the only critical points of E[f] are a maximum at z = 0 and a manifold of saddle points given by DISPLAYFORM1 where W ≡ {z|z * x = 0}. We decompose z as DISPLAYFORM2 where ζ > 0, w ∈ W. This gives z 2 = w 2 + ζ 2. The choice of w, ζ, φ is unique up to factors of 2π in φ, as can be seen by taking an inner product with x. Since the gradient decomposes as follows: DISPLAYFORM3 the directions e iφ x x, w w are unaffected by gradient descent and thus the problem reduces to a two-dimensional one in the space (ζ, w). Note also that the objective for this twodimensional problem is a Morse function, despite the fact that in the original space there was a manifold of saddle points. It is also clear from this decomposition of the gradient that the stable manifolds of the saddles are precisely the set W.It is evident from 24 that the dispersive property does not hold globally in this case. For z / ∈ B ||x|| we see that gradient descent will cause ζ to decrease, implying positive curvature normal to the stable manifolds of the saddles. This is a consequence of the global geometry of the objective. Despite this, in the region of the space that is more "interesting", namely B ||x||, we do observe the dispersive property, and can use it to obtain a convergence rate for gradient descent. 
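For completeness, a minimal sketch of randomly initialized gradient descent on the generalized phase retrieval objective is given below. Because the displayed equations are garbled, the least-squares objective is assumed to be f(z) = (1/2p) sum_k (|a_k^* z|^2 - y_k^2)^2 with i.i.d. complex Gaussian a_k, whose conjugate Wirtinger gradient is (1/p) sum_k (|a_k^* z|^2 - y_k^2) a_k (a_k^* z); the step size, problem sizes, and initialization scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 10, 400

x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x /= np.linalg.norm(x)                                    # ground truth, unit norm
A = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
y2 = np.abs(A @ x) ** 2                                   # observed squared magnitudes |a_k^* x|^2

def wirtinger_grad(z):
    Az = A @ z                                            # entries a_k^* z
    r = np.abs(Az) ** 2 - y2                              # residuals
    return A.conj().T @ (r * Az) / p

eta = 0.1
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z /= np.linalg.norm(z)                                    # random initialization on the unit sphere
for _ in range(5000):
    z = z - eta * wirtinger_grad(z)

c = np.vdot(z, x); c /= abs(c)                            # align the global phase ambiguity
print("relative error:", np.linalg.norm(x - c * z) / np.linalg.norm(x))
```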
We define a set that contains the regions that feeds into small gradient regions around saddle points within B ||x|| by DISPLAYFORM4 We will show that, as in the case of orthogonal dictionary learning, we can both bound the probability of initializing in (a subset of) the complement of Q ζ0 and obtain a rate for convergence of gradient descent in the case of such an initialization. plane. The full red curves are the boundaries between the sets S 1, S 2, S 3, S 4 used in the analysis. The dashed red line is the boundary of the set Q ζ0 that contains small gradient regions around critical points that are not minima. The maximizer and saddle point are shown in dark green, while the minimizer is in pink. These are used to find the change in ζ, w at every iteration in each region: DISPLAYFORM5 We now show that gradient descent initialized in S 1 \Q ζ0 cannot exit ∪ 2 we are guaranteed from Lemma 13 that at every iteration ζ ≥ ζ 0. Thus the region with ζ < ζ 0 can only be entered if gradient descent is initialized in it. It follows that initialization in S 1 \Q ζ0 rules out entering Q ζ0 at any future iteration of gradient descent. Since this guarantees that regions that feed into small gradient regions are avoided, an efficient convergence rate can again be obtained. Theorem 3 (Gradient descent convergence rate for generalized phase retrieval). Gradient descent on 22 with step size η < DISPLAYFORM0 iterations with probability DISPLAYFORM1 ii) Since only a step from S 4 can decrease ζ, we have that for the initial point z 2 > x 2.Combined with DISPLAYFORM2 this gives DISPLAYFORM3 and using the lower bound (1 − 2η x 2 c)ζ ≤ ζ we obtain DISPLAYFORM4 where in the last inequality we used c < DISPLAYFORM5 Proof of Lemma 14. We use the fact that for the next iterate we have DISPLAYFORM6 We will also repeatedly use η < DISPLAYFORM7 which is a shown in Lemma 13. DISPLAYFORM8 We want to show DISPLAYFORM9 (1 + c) x 2.1) We have z ∈ S 3 ⇒ z 2 = (1 − ε) x 2 for some ε ≤ c and using 28 we must show DISPLAYFORM10 Proof of Theorem 3: (Gradient descent convergence rate for generalized phase retrieval). We now bound the number of iterations that gradient descent, after random initialization in S 1, requires to reach a point where one of the convergence criteria detailed in Lemma 15 is fulfilled. From Lemma 14, we know that after initialization in S 1 we need to consider only the set DISPLAYFORM11 S i. The number of iterations in each set will be determined by the bounds on the change in ζ, ||w|| detailed in 27. Assuming we initialize with some ζ = ζ 0. Then the maximal number of iterations in this region is DISPLAYFORM0 since after this many iterations DISPLAYFORM1 The only concern is that after an iteration in S 3 ∪ S 4 the next iteration might be in S 2.To account for this situation, we find the maximal number of iterations required to reach S 3 ∪ S 4 again. This is obtained from the bound on ζ in Lemma 13.Using this , and the fact that for every iteration in S 2 we are guaranteed ζ ≥ (1 + 2η x 2 c)ζ the number of iterations required to reach S 3 ∪ S 4 again is given by DISPLAYFORM2 The final rate to convergence is DISPLAYFORM0 C.9 Probability of the bound holdingThe bound applies to an initialization with ζ ≥ ζ 0, hence in S 1 \Q ζ0. Assuming uniform initialization in S 1, the set Q ζ0 is simply a band of width 2ζ 0 around the equator of the ball B x / √ 2 (in R 2n, using the natural identification of C n with R 2n). This volume can be calculated by integrating over 2n − 1 dimensional balls of varying radius. 
DISPLAYFORM1 and by V (n) = π n/2 n 2 Γ(n 2) the hypersphere volume, the probability of initializing in S 1 ∩ Q ζ0 (and thus in a region that feeds into small gradient regions around saddle points) is DISPLAYFORM2. For small ζ we again find that P(fail) scales linearly with ζ, as was the case for the previous problems considered. Proof of Lemma 3: (Volume of C ζ). We are interested in the relative volume DISPLAYFORM0 Vol(S n−1) ≡ V ζ. Using the standard solid angle formula, it is given by DISPLAYFORM1 This integral admits no closed form solution but one can construct a linear approximation around small ζ and show that it is convex. Thus the approximation provides a lower bound for V ζ and an upper bound on the failure probability. From symmetry considerations the zero-order term is V 0 = 1 2n. The first-order term is given by DISPLAYFORM2 We now require an upper bound for the second integral since we are interested in a lower bound for V ζ. We can express it in terms of the second moment of the L ∞ norm of a Gaussian vector as follows: where µ(X) is the Gaussian measure on the vector X ∈ R n. We can bound the first term using Combining these bounds, the leading order behavior of the gradient is DISPLAYFORM3 This linear approximation is indeed a lower bound, since using integration by parts twice we have and this is the smallest L ∞ ball containing C ζ.Proof. Given the surface of some L ∞ ball for w, we can ask what is the minimal ζ such that ∂C ζm intersects this surface. This amounts to finding the minimal q n given some w ∞. Yet this is clearly obtained by setting all the coordinates of w to be equal to w ∞ (this is possible since we are guaranteed q n ≥ w ∞ ⇒ w ∞ ≤ where one instead maximizes q n with some fixed w ∞ .Given some surface of an L 2 ball, we can ask what is the minimal C ζ such that C ζ ⊆ B 2 r. This is equivalent to finding the maximal ζ M such that ∂C ζ M intersects the surface of the L 2 ball. Since q n is fixed, maximizing ζ is equivalent to minimizing w ∞. This is done by setting w ∞ = w √ n−1, which gives DISPLAYFORM4 The statement in the lemma follows from combining these .. If we now combine this with the fact that after a Riemannian gradient step cos(gη)q i − sin(gη) ≤ q i ≤ cos(gη)q i + sin(gη), the above condition on η implies the inequality (*), which in turn ensures that |w i | < r ⇒ |w i | < w ∞: |w i | < |w i | + sin(gη) < r + gη < (*)(1 − g 2 η 2)b − gη < cos(gη) w ∞ − sin(gη) ≤ w ∞ Due to the above analysis, it is evident that any w i such that |w i | = w ∞ obeys |w i | > r, from which it follows that we can use 31 to obtain q n w ∞ − 1 = ζ ≥ ζ 1 + √ n 2 ηc(w) | We provide an efficient convergence rate for gradient descent on the complete orthogonal dictionary learning objective based on a geometric analysis. | 605 | scitldr |
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well-known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalizations, i.e., we train at a specific signal-to-noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting. Reliable digital communication, both wireline (ethernet, cable and DSL modems) and wireless (cellular, satellite, deep space), is a primary workhorse of the modern information age. A critical aspect of reliable communication involves the design of codes that allow transmissions to be robustly (and computationally efficiently) decoded under noisy conditions. This is the discipline of coding theory; over the past century, and especially the past 70 years (since the birth of information theory BID22), much progress has been made in the design of near-optimal codes. Landmark codes include convolutional codes, turbo codes, low density parity check (LDPC) codes and, recently, polar codes. The impact on humanity is enormous - every cellular phone designed uses one of these codes, which feature in global cellular standards ranging from the 2nd generation to the 5th generation respectively, and are textbook material BID16. The canonical setting is one of point-to-point reliable communication over the additive white Gaussian noise (AWGN) channel, and performance of a code in this setting is its gold standard. The AWGN channel fits much of wireline and wireless communications, although the front end of the receiver may have to be specifically designed before the signal is processed by the decoder (examples: intersymbol equalization in cable modems, beamforming and sphere decoding in multiple antenna wireless systems); again this is textbook material BID26. There are two long-term goals in coding theory: (a) design of new, computationally efficient codes that improve the state of the art (probability of correct reception) over the AWGN setting. Since the current codes already operate close to the information-theoretic "Shannon limit", the emphasis is on robustness and adaptability to deviations from the AWGN setting (a list of channel models motivated by practical settings, such as urban, pedestrian, and vehicular, in the recent 5th generation cellular standard is available in Annex B of 3GPP TS 36.101). (b) design of new codes for multi-terminal (i.e., beyond point-to-point) settings - examples include the feedback channel, the relay channel and the interference channel. Progress on these long-term goals has generally been driven by individual human ingenuity and, befittingly, is sporadic. For instance, the time between convolutional codes (2nd generation cellular standards) and polar codes (5th generation cellular standards) is over four decades.
Deep learning is fast emerging as capable of learning sophisticated algorithms from observed data (input, action, output) alone and has been remarkably successful in a large variety of human endeavors (ranging from language BID11 to vision BID17 to playing Go BID23). Motivated by these successes, we envision that deep learning methods can play a crucial role in solving both the aforementioned goals of coding theory. While the learning framework is clear and there is virtually unlimited training data available, there are two main challenges: (a) The space of codes is very vast and the sizes astronomical; for instance a rate 1/2 code over 100 information bits involves designing 2 100 codewords in a 200 dimensional space. Computationally efficient encoding and decoding procedures are a must, apart from high reliability over the AWGN channel. (b) Generalization is highly desirable across block lengths and data rate that each work very well over a wide range of channel signal to noise ratios (SNR). In other words, one is looking to design a family of codes (parametrized by data rate and number of information bits) and their performance is evaluated over a range of channel SNRs. For example, it is shown that when a neural decoder is exposed to nearly 90% of the codewords of a rate 1/2 polar code over 8 information bits, its performance on the unseen codewords is poor. In part due to these challenges, recent deep learning works on decoding known codes using data-driven neural decoders have been limited to short or moderate block lengths BID4 BID13. Other deep learning works on coding theory focus on decoding known codes by training a neural decoder that is initialized with the existing decoding algorithm but is more general than the existing algorithm BID12 BID29. The main challenge is to restrict oneself to a class of codes that neural networks can naturally encode and decode. In this paper, we restrict ourselves to a class of sequential encoding and decoding schemes, of which convolutional and turbo codes are part of. These sequential coding schemes naturally meld with the family of recurrent neural network (RNN) architectures, which have recently seen large success in a wide variety of time-series tasks. The ancillary advantage of sequential schemes is that arbitrarily long information bits can be encoded and also at a large variety of coding rates. Working within sequential codes parametrized by RNN architectures, we make the following contributions. Focusing on convolutional codes we aim to decode them on the AWGN channel using RNN architectures. Efficient optimal decoding of convolutional codes has represented historically fundamental progress in the broad arena of algorithms; optimal bit error decoding is achieved by the'Viterbi decoder' BID27 which is simply dynamic programming or Dijkstra's algorithm on a specific graph (the 'trellis') induced by the convolutional code. Optimal block error decoding is the BCJR decoder BID0 which is part of a family of forward-backward algorithms. While early work had shown that vanilla-RNNs are capable in principle of emulating both Viterbi and BCJR decoders BID28 BID21 we show empirically, through a careful construction of RNN architectures and training methodology, that neural network decoding is possible at very near optimal performances (both bit error rate (BER) and block error rate (BLER)). 
The key point is that we train a RNN decoder at a specific SNR and over short information bit lengths (100 bits) and show strong generalization capabilities by testing over a wide range of SNR and block lengths (up to 10,000 bits). The specific training SNR is closely related to the Shannon limit of the AWGN channel at the rate of the code and provides strong information theoretic collateral to our empirical . Turbo codes are naturally built on top of convolutional codes, both in terms of encoding and decoding. A natural generalization of our RNN convolutional decoders allow us to decode turbo codes at BER comparable to, and at certain regimes, even better than state of the art turbo decoders on the AWGN channel. That data driven, SGD-learnt, RNN architectures can decode comparably is fairly remarkable since turbo codes already operate near the Shannon limit of reliable communication over the AWGN channel. We show the afore-described neural network decoders for both convolutional and turbo codes are robust to variations to the AWGN channel model. We consider a problem of contemporary interest: communication over a "bursty" AWGN channel (where a small fraction of noise has much higher variance than usual) which models inter-cell interference in OFDM cellular systems (used in 4G and 5G cellular standards) or co-channel radar interference. We demonstrate empirically the neural network architectures can adapt to such variations and beat state of the art heuristics comfortably (despite evidence elsewhere that neural network are sensitive to models they are trained on BID24). Via an innovative local perturbation analysis (akin to BID15)), we demonstrate the neural network to have learnt sophisticated preprocessing heuristics in engineering of real world systems BID10. Among diverse families of coding scheme available in the literature, sequential coding schemes are particularly attractive. They (a) are used extensively in mobile telephone standards including satellite communications, 3G, 4G, and LTE; (b) provably achieve performance close to the information theoretic limit; and (c) have a natural recurrent structure that is aligned with an established family of deep models, namely recurrent neural networks. We consider the basic sequential code known as convolutional codes, and provide a neural decoder that can be trained to achieve the optimal classification accuracy. A standard example of a convolutional code is the rate-1/2 Recursive Systematic Convolutional (RSC) code. The encoder performs a forward pass on a recurrent network shown in FIG0 on binary input sequence b = (b 1, . . ., b K), which we call message bits, with binary vector states (s 1, . . ., s K) and binary vector outputs (c 1, . . ., c K), which we call transmitted bits or a codeword. At time k with binary input b k ∈ {0, 1} and the state of a two-dimensional binary vector s k = (s k1, s k2), the output is a two-dimensional binary vector DISPLAYFORM0 2, where x ⊕ y = |x − y|. The state of the next cell is updated as DISPLAYFORM1 Initial state is assumed to be 0, i.e., s 1 =. The 2K output bits are sent over a noisy channel, with the canonical one being the AWGN channel: the received binary vectors y = (y 1, . . ., y K), which are called the received bits, are y ki = c ki + z ki for all k ∈ [K] and i ∈ {1, 2}, where z ki's are i.i.d. Gaussian with zero mean and variance σ 2. Decoding a received signal y refers to (attempting to) finding the maximum a posteriori (MAP) estimate. 
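The encoder just described is a short binary recurrence and is straightforward to simulate. Because the update equations above are garbled, the sketch below instantiates it with the common memory-2 RSC defined by feedback polynomial 1 + D + D^2 and feedforward polynomial 1 + D^2 (octal (7,5)); this specific choice is an assumption, not taken from the text. The channel follows the text: y_ki = c_ki + z_ki with z_ki i.i.d. Gaussian and SNR = -10 log10(sigma^2).

```python
import numpy as np

def rsc_encode(b):
    """Rate-1/2 recursive systematic convolutional encoder (assumed (7,5) polynomials).
    Returns an array of shape (K, 2): (systematic bit, parity bit) for each time step."""
    s1 = s2 = 0
    c = np.zeros((len(b), 2), dtype=int)
    for k, bit in enumerate(b):
        d = bit ^ s1 ^ s2          # feedback taps: 1 + D + D^2
        parity = d ^ s2            # feedforward taps: 1 + D^2
        c[k] = (bit, parity)       # systematic output + parity output
        s1, s2 = d, s1             # shift the two-bit register
    return c

def awgn(c, snr_db, rng):
    """y = c + z with z ~ N(0, sigma^2) and SNR = -10 log10(sigma^2), as in the text."""
    sigma = 10 ** (-snr_db / 20.0)
    return c + sigma * rng.standard_normal(c.shape)

rng = np.random.default_rng(0)
K = 100
b = rng.integers(0, 2, K)                          # message bits
y = awgn(rsc_encode(b), snr_db=0.0, rng=rng)       # received sequence, shape (K, 2)
```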
Due to the simple recurrent structure, efficient iterative schemes are available for finding the MAP estimate for convolutional codes (vit; BID0 . There are two MAP decoders depending on the error criterion in evaluating the performance: bit error rate (BER) or block error rate (BLER).BLER counts the fraction of blocks that are wrongly decoded (assuming many such length-K blocks have been transmitted), and matching optimal MAP estimator isb = arg max b Pr(b|y). Using dynamic programming, one can find the optimal MAP estimate in time linear in the block length K, which is called the Viterbi algorithm. BER counts the fraction of bits that are wrong, and matching optimal MAP estimator isb k = arg max b k Pr(b k |y), for all k = 1, · · ·, K. Again using dynamic programming, the optimal estimate can be computed in O(K) time, which is called the BCJR algorithm. In both cases, the linear time optimal decoder crucially depends on the recurrent structure of the encoder. This structure can be represented as a hidden Markov chain (HMM), and both decoders are special cases of general efficient methods to solve inference problems on HMM using the principle of dynamic programming (e.g. belief propagation). These methods efficiently compute the exact posterior distributions in two passes through the network: the forward pass and the backward pass. Our first aim is to train a (recurrent) neural network from samples, without explicitly specifying the underlying probabilistic model, and still recover the accuracy of the matching optimal decoders. At a high level, we want to prove by a constructive example that highly engineered dynamic programming can be matched by a neural network which only has access to the samples. The challenge lies in finding the right architecture and showing the right training examples. Neural decoder for convolutional codes. We treat the decoding problem as a K-dimensional binary classification problem for each of the message bits b k. The input to the decoder is a length-2K sequence of received bits y ∈ R 2K each associated with its length-K sequence of "true classes" b ∈ {0, 1} K. The goal is to train a model to find an accurate sequence-to-sequence classifier. The input y is a noisy version of the class b according to the rate-1/2 RSC code defined in earlier in this section. We generate N training examples (DISPLAYFORM2 according to this joint distribution to train our model. We introduce a novel neural decoder for rate-1/2 RSC codes, we call N-RSC. It is two layers of bi-direction Gated Recurrent Units (bi-GRU) each followed by batch normalization units, and the output layer is a single fully connected sigmoid unit. Let W denote all the parameters in the model whose dimensions are shown in FIG1, and f W (y) ∈ K denote the output sequence. The k-th output f W (y) k estimates the posterior probability Pr(b k = 1|y), and we train the weights W to minimize the L w error with respect to a choice of a loss function (·, ·) specified below: DISPLAYFORM3 As the encoder is a recurrent network, it is critical that we use recurrent neural networks as a building block. Among several options of designing RNNs, we make three specific choices that are crucial in achieving the target accuracy: (i) bidirectional GRU as a building block instead of unidirectional GRU; (ii) 2-layer architecture instead of a single layer; and (iii) using batch normalization. 
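A minimal sketch of the N-RSC architecture as described (two bi-GRU layers, each followed by batch normalization, and a per-time-step fully connected sigmoid output) is given below in PyTorch. The framework choice and the hidden width are assumptions; the original layer dimensions appear only in a figure that is not reproduced here. The model maps a received sequence of shape (batch, K, 2) to K per-bit posterior estimates.

```python
import torch
import torch.nn as nn

class NRSC(nn.Module):
    """Sketch of the N-RSC decoder: 2 x bi-GRU, batch norm after each, per-step sigmoid."""
    def __init__(self, hidden=200):                 # hidden width is an assumed value
        super().__init__()
        self.gru1 = nn.GRU(2, hidden, batch_first=True, bidirectional=True)
        self.bn1 = nn.BatchNorm1d(2 * hidden)
        self.gru2 = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.bn2 = nn.BatchNorm1d(2 * hidden)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, y):                           # y: (batch, K, 2) received symbols
        h, _ = self.gru1(y)
        h = self.bn1(h.transpose(1, 2)).transpose(1, 2)
        h, _ = self.gru2(h)
        h = self.bn2(h.transpose(1, 2)).transpose(1, 2)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # (batch, K) estimates of Pr(b_k = 1 | y)
```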
As we show in Table 1 in Appendix C, unidirectional GRU fails because the underlying dynamic program requires bi-directional recursion of both forward pass and backward pass through the received sequence. A single layer bi-GRU fails to give the desired performance, and two layers is sufficient. We show how the accuracy depends on the number of layer in Table 1 in Appendix C. Batch normalization is also critical in achieving the target accuracy. Training. We propose two novel training techniques that improve accuracy of the trained model significantly. First, we propose a novel loss function guided by the efficient dynamic programming, that significantly reduces the number of training example we need to show. A natural L 2 loss (which gives better accuracy than cross-entropy in our problem) would be DISPLAYFORM4 DISPLAYFORM5 Recall that the neural network estimates the posterior Pr(b k = 1|y (i) ), and the true label b DISPLAYFORM6 k is a mere surrogate for the posterior, as typically the posterior distribution is simply not accessible. However, for decoding RSC codes, there exists efficient dynamic programming that can compute the posterior distribution exactly. This can significantly improve sample complexity of our training, as we are directly providing Pr(b k = 1|y (i) ) as opposed to a sample from this distribution, which is b DISPLAYFORM7 k. We use a python implementation of BCJR in to compute the posterior distribution exactly, and minimize the loss DISPLAYFORM8 Next, we provide a guideline for choosing the training examples that improve the accuracy. As it is natural to sample the training data and test data from the same distribution, one might use the same noise level for testing and training. However, this is not reliable as shown in FIG2. Channel noise is measured by Signal-to-Noise Ratio (SNR) defined as −10 log 10 σ 2 where σ 2 is the variance of the Gaussian noise in the channel. For rate-1/2 RSC code, we propose using training data with noise level SNR train = min{SNR test, 0}. Namely, we propose using training SNR matched to test SNR if test SNR is below 0dB, and otherwise fix training SNR at 0dB independent of the test SNR. In Appendix D, we give a general formula for general rate-r codes, and provide an information theoretic justification and empirical evidences showing that this is near optimal choice of training data. Performance. In FIG3, for various test SNR, we train our N-RSC on randomly generated training data for rate-1/2 RSC code of block length 100 over AWGN channel with proposed training SNR of min{SNR test, 1}. We trained the decoder with Adam optimizer with learning rate 1e-3, batch size 200, and total number of examples is 12,000, and we use clip norm. On the left we show bit-errorrate when tested with length 100 RSC encoder, matching the training data. 1 We show that N-RSC is able to learn to decode and achieve the optimal performance of the optimal dynamic programming (MAP decoder) almost everywhere. Perhaps surprisingly, we show on the right figure that we can use the neural decoder trained on length 100 codes, and apply it directly to codes of length 10, 000 and still meet the optimal performance. Note that we only give 12, 000 training examples, while the number of unique codewords is 2 10,000. This shows that the proposed neural decoder (a) can generalize to unseen codeword; and (b) seamlessly generalizes to significantly longer block lengths. 
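The training recipe described above can be sketched as follows, reusing the encoder and channel functions from the earlier sketch and the N-RSC model above. It follows the stated hyperparameters (Adam, learning rate 1e-3, batch size 200, 12,000 examples, block length 100) and the rule SNR_train = min{SNR_test, 0} dB; for brevity it regresses on the bit labels b_k under an L2 loss rather than on the exact BCJR posteriors used in the text.

```python
import numpy as np
import torch

def make_batch(batch_size, K, snr_db, rng):
    """Random messages encoded with rsc_encode and passed through the AWGN channel."""
    ys, bs = [], []
    for _ in range(batch_size):
        b = rng.integers(0, 2, K)
        ys.append(awgn(rsc_encode(b), snr_db, rng))
        bs.append(b)
    return (torch.tensor(np.array(ys), dtype=torch.float32),
            torch.tensor(np.array(bs), dtype=torch.float32))

snr_test = 2.0
snr_train = min(snr_test, 0.0)                  # proposed rule: SNR_train = min(SNR_test, 0) dB
model = NRSC()                                  # architecture sketch from above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

rng = np.random.default_rng(1)
for step in range(60):                          # 60 batches of 200 = 12,000 training examples
    y, b = make_batch(200, 100, snr_train, rng)
    loss = torch.mean((model(y) - b) ** 2)      # L2 loss (text trains against BCJR posteriors)
    opt.zero_grad(); loss.backward(); opt.step()

model.eval()                                    # hard-decide at 0.5 on the estimated posteriors
y, b = make_batch(200, 100, snr_test, rng)
with torch.no_grad():
    ber = torch.mean(((model(y) > 0.5).float() != b).float()).item()
print("test BER at %.1f dB: %.4f" % (snr_test, ber))
```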
More experimental including other types of convolutional codes are provided in Appendix A.We also note that training with b (i) k in decoding convolutional codes also gives the same final BER performance as training with the posterior Pr(b k = 1|y (i) ).Complexity. When it comes to an implementation of a decoding algorithm, another important metric in evaluating the performance of a decoder is complexity. In this paper our comparison metrics focus on the BER performance; the main claim in this paper is that there is an alternative decoding methodology which has been hitherto unexplored and to point out that this methodology can yield excellent BER performance. Regarding the circuit complexity, we note that in computer vision, there have been many recent ideas to make large neural networks practically implementable in a cell phone. For example, the idea of distilling the knowledge in a large network to a smaller network and the idea of binarization of weights and data in order to do away with complex multiplication operations have made it possible to implement inference on much larger neural networks than the one in this paper in a smartphone BID7 BID8. Such ideas can be utilized in our problem to reduce the complexity as well. A serious and careful circuit implementation complexity optimization and comparison is significantly complicated and is beyond the scope of a single paper. Having said this, a preliminary comparison is as follows. The complexity of all decoders (Viterbi, BCJR, neural decoder) is linear in the number of information bits (block length). The number of multiplications is quadratic in the dimension of hidden states of GRU FORMULA8 for the proposed neural decoder, and the number of encoder states for Viterbi and BCJR algorithms. Turbo codes are naturally built out of convolutional codes (both encoder and decoder) and represent some of the most successful codes for the AWGN channel BID1. A corresponding stacking of multiple layers of the convolutional neural decoders leads to a natural neural turbo decoder which we show to match (and in some regimes even beat) the performance of standard state of the art turbo decoders on the AWGN channel; these details are available in Appendix B. Unlike the convolutional codes, the state of the art (message-passing) decoders for turbo codes are not the corresponding MAP decoders, so there is no contradiction in that our neural decoder would beat the message-passing ones. The training and architectural choices are similar in spirit to those of the convolutional code and are explored in detail in Appendix B. In the previous sections, we demonstrated that the neural decoder can perform as well as the turbo decoder. In practice, there are a wide variety of channel models that are suited for differing applications. Therefore, we test our neural decoder under some canonical channel models to see how robust and adaptive they are. Robustness refers to the ability of a decoder trained for a particular channel model to work well on a differing channel model without re-training. Adaptivity refers to the ability of the learning algorithm to adapt and retrain for differing channel models. In this section, we demonstrate that the neural turbo decoder is both adaptive and robust by testing on a set of non-Gaussian channel models. Robustness. The robustness test is interesting from two directions, other than obvious practical value. 
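As a rough illustration of the complexity comparison above, the snippet below counts multiplications per decoded bit for a 2-layer bi-GRU with 400 hidden units against a 4-state (memory-2) convolutional code; these are order-of-magnitude estimates, not a circuit-level analysis.

```python
def gru_mults_per_step(d_in, d_h):
    # 3 gates, each with an input-to-hidden and a hidden-to-hidden multiply
    return 3 * (d_in * d_h + d_h * d_h)

d_h = 400
layer1 = 2 * gru_mults_per_step(2, d_h)          # bidirectional
layer2 = 2 * gru_mults_per_step(2 * d_h, d_h)
print("neural decoder ~", layer1 + layer2, "mults per bit")   # quadratic in d_h

n_states = 4                                      # memory-2 RSC encoder
print("Viterbi/BCJR   ~ O(n_states) =", n_states, "state updates per bit")
```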
Firstly, it is known from information theory that Gaussian noise is the worst case noise among all noise distributions with a given variance BID22 BID9. Shannon showed in his original paper BID22 ) that among all memoryless noise sequences (with the same average energy), Gaussian noise is the worst in terms of capacity. After a long time, BID9 showed that for any finite block length, the BER achieved by the minimum distance decoder for any noise pdf is lower bounded by the BER for Gaussian noise under the assumption of Gaussian codebook. Since Viterbi decoding is the minimum distance decoder for convolutional codes, it is naturally robust in the precise sense above. On the other hand, turbo decoder does not inherit this property, making it vulnerable to adversarial attacks. We show that the neural decoder is more robust to a non-Gaussian noise, namely, t-distributed noise, than turbo decoder. Secondly, the robust test poses an interesting challenge for neural decoders since deep neural networks are known to misclassify when tested against small adversarial perturbations BID24 BID5 ). While we are not necessarily interested in adversarial perturbations to the input in this paper, it is important for the learning algorithm to be robust against differing noise distributions. We leave research on the robustness to small adversarial perturbations as a future work. For the non-Gaussian channel, we choose the t-distribution family parameterized by parameter ν. We test the performance of both the neural and turbo decoder in this experiment when ν = 3 in Figure 5a and observe that the neural decoder performs significantly better than the standard Turbo decoder (also see FIG0 in Appendix E). In order to understand the reason for such a bad performance of the standard Turbo decoder, we plot the average output log-likelihood ratio (LLR) log p(b k = 1)−log p(b k = 1) as a function of the bit position in Figure 5b, when the input is all-zero codeword. The main issue for the standard decoder is that the LLRs are not calculated accurately (see FIG0 in Appendix E): the LLR is exaggerated in the t-distribution while there is some exaggeration in the neural decoder as well, it is more modest in its prediction leading to more contained error propagation. Adaptivity. A great advantage of neural channel decoder is that the neural network can learn a decoding algorithm even if the channel does not yield to a clean mathematical analysis. Consider a scenario where the transmitted signal is added with a Gaussian noise always, however, with a small probability, a further high variance noise is added. The channel model is mathematically described as follows, with y i describing the received symbol and x i denoting the transmitted symbol at time instant i: DISPLAYFORM0, and w i ∼ N (0, σ 2 b) with probability ρ and w i = 0 with probability 1 − ρ, i.e., z i denotes the Gaussian noise whereas w i denotes the bursty noise. This channel model accurately describes how radar signals (which are bursty) can create an interference for LTE in next generation wireless systems. This model has attracted attention in communications systems community due to its practical relevance BID19 BID20. Under the aforesaid channel model, it turns out that standard Turbo coding decoder fails very badly . The reason that the Turbo decoder cannot be modified in a straight-forward way is that the location of the bursty noise is a latent variable that needs to be jointly decoded along with the message bits. 
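Both test channels used in this section can be simulated directly from their definitions; in the sketch below, x is the transmitted (e.g. BPSK-modulated) codeword, σ_b and ρ are the burst standard deviation and burst probability of the bursty model, and ν is the degrees-of-freedom parameter of the t-distributed noise.

```python
import numpy as np

def bursty_awgn(x, sigma, sigma_b, rho):
    """y_i = x_i + z_i + w_i, with z_i ~ N(0, sigma^2) always and
    w_i ~ N(0, sigma_b^2) with probability rho (otherwise w_i = 0)."""
    z = sigma * np.random.randn(*x.shape)
    burst = (np.random.rand(*x.shape) < rho) * sigma_b * np.random.randn(*x.shape)
    return x + z + burst

def t_noise_channel(x, nu=3, scale=1.0):
    # heavy-tailed non-Gaussian noise used in the robustness test
    return x + scale * np.random.standard_t(nu, size=x.shape)
```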
In order to combat this particular noise model, we fine-tune our neural decoder on this noise model, initialized from the AWGN neural decoder, and term it the bursty neural decoder. There are two state-of-the-art heuristics BID18: (a) erasure-thresholding: all LLR above a threshold are set to 0 (b) saturation-thresholding: all LLR above a threshold are set to the (signed) threshold. We demonstrate the performance of our AWGN neural decoder (trained on Gaussian noise) as well as standard turbo decoder (for Gaussian noise) on this problem, shown in Figure 6 when σ b = 3.5, 2, 5. We summarize the of Figure 6: FORMULA3 Interpreting the Neural Decoder We try to interpret the action of the neural decoder trained under bursty noise. To do so, we look at the following simplified model, where y i = x i + z i + w i where x i, y i, z i are as before, but w i = B during the 50-th symbol in a 100-length codeword. We also fix the input codeword to be the all-zero codeword. We look at the average output LLR as a function of position for the one round of the neural decoder in FIG6 and one round of BCJR algorithm in FIG6 (the BER as a function of position is shown in FIG0 in Appendix E). A negative LLR implies correct decoding at this level and a positive LLR implies incorrect decoding. It is evident that both RNN and BCJR algorithms make errors concentrated around the mid-point of the codeword. However, what is different between the two figures is that the scale of likelihoods of the two figures are quite different: the BCJR has a high sense of (misplaced) confidence, whereas the RNN is more modest in its assessment of its confidence. In the later stages of the decoding, the exaggerated sense of confidence of BCJR leads to an error propagation cascade eventually toggling other bits as well. In this paper we have demonstrated that appropriately designed and trained RNN architectures can'learn' the landmark algorithms of Viterbi and BCJR decoding based on the strong generalization capabilities we demonstrate. This is similar in spirit to recent works on'program learning' in the literature BID14 BID2. In those works, the learning is assisted significantly by a low level program trace on an input; here we learn the Viterbi and BCJR algorithms only by end-to-end training samples; we conjecture that this could be related to the strong "algebraic" nature of the Viterbi and BCJR algorithms. The representation capabilities and learnability of the RNN architectures in decoding existing codes suggest a possibility that new codes could be leant on the AWGN channel itself and improve the state of the art (constituted by turbo, LDPC and polar codes). Also interesting is a new look at classical multi-terminal communication problems, including the relay and interference channels. Both are active areas of present research. The rate-1/2 RSC code introduced in Section 2 is one example of many convolutional codes. In this section, we show empirically that neural decoders can be trained to decode other types of convolutional codes as well as MAP decoder. We consider the following two convolutional codes. Unlike the rate-1/2 RSC code in Section 2, the convolutional code in Figure 8(a) is not recursive, i.e., state does not have a feedback. Also, it is non-systematic, i.e., the message bits can not be seen immediately from the coded bits. The convolutional code in Figure 8 (b) is another type of rate-1/2 RSC code with a larger state dimension (dimension 3 instead of 2). 
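The two thresholding heuristics that the bursty neural decoder is compared against can be written in a few lines; here T is the (tunable) LLR threshold, and "above the threshold" is read as exceeding it in magnitude.

```python
import numpy as np

def erasure_threshold(llr, T):
    # LLRs above the threshold are treated as erasures (set to 0)
    out = llr.copy()
    out[np.abs(out) > T] = 0.0
    return out

def saturation_threshold(llr, T):
    # LLRs above the threshold are clipped to the (signed) threshold
    return np.clip(llr, -T, T)
```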
Figure 8 show the architecture of neural network we used for the convolutional codes in Figure 8. For the code in Figure 8(a), we used the exact same architecture we used for the rate-1/2 RSC code in Section 2. For the code in Figure 8 (b), we used a larger network (LSTM instead of GRU and 800 hidden units instead of 400). This is due to the increased state dimension in the encoder. Output For training of neural decoder in Figure 8 (a), we used 12000 training examples of block length 100 with fixed SNR 0dB. For training convolutional code (b), we used 48000 training examples of block length 500. We set batch size 200 and clip norm. The convolutional code (b) has a larger state space. DISPLAYFORM0 Performance. In FIG0 show the BER and BLER of the trained neural decoder for convolutional code in Figure 8 (a) under various SNRs and block lengths. As we can see from these figures, neural decoder trained on one SNR (0dB) and short block length can be generalized to decoding as good as MAP decoder under various SNRs and block lengths. Similarly in FIG0, we show the BER and BLER performances of trained neural decoder for convolutional code in Figure 8 (b), which again shows the generalization capability of the trained neural decoder. Turbo codes, also called parallel concatenated convolutional codes, are popular in practice as they significantly outperform RSC codes. We provide a neural decoder for turbo codes using multiple layers of neural decoder we introduced for RSC codes. An example of rate-1/3 turbo code is shown in FIG0. Two identical rate-1/2 RSC encoders are used, encoder 1 with original sequence b as input and encoder 2 with a randomly permuted version of b as input. Interleaver performs the random permutation. As the first output sequence c 1 of encoder 1 is identical to the output sequence c 1 of encoder 2, and hence redundant. So the sequence c 1 is thrown away, and the rest of the sequences (c 1, c 2, c 2) are transmitted; hence, rate is 1/3.The sequences (c 1, c 2, c 2) are transmitted over AWGN channel, and the noisy received sequences are (y 1, y 2, y 2). Due to the interleaved structure of the encoder, MAP decoding is computationally intractable. Instead, an iterative decoder known as turbo decoder is used in practice, which uses the RSC MAP decoder (BCJR algorithm) as a building block. At first iteration the standard BCJR estimates the posterior Pr(b k |y 1, y 2) with uniform prior on b k for all k ∈ [K]. Next, BCJR estimates Pr(b k |π(y 1), y 2) with the interleaved sequence π(y 1), but now takes the output of the first layer as a prior on b k' s. This process is repeated, refining the belief on what the codewords b k's are, until convergence and an estimation is made in the end for each bit. Training. We propose a neural decoder for turbo codes that we call N-Turbo in FIG0. Following the deep layered architecture of the turbo decoder, we stack layers of a variation of our N-RSC decoder, which we call N-BCJR. However, end-to-end training (using examples of the input sequence SNR for block length 100, 1000, and 10,000 y (i)'s and the message sequence of b (i)'s) of such a deep layers of recurrent architecture is challenging. We propose first training each layer separately, use these trained models as initializations, and train the deep layered neural decoder of N-Turbo starting from these initialized weights. We first explain our N-BCJR architecture, which is a new type of N-RSC that can take flexible bit-wise prior distribution as input. 
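The rate-1/3 turbo encoder described above (two identical rate-1/2 RSC encoders, the second fed through an interleaver, with the duplicated systematic stream dropped) can be sketched as follows; here `rsc_encode` is assumed to return the systematic and parity streams of a single RSC encoder separately, and the interleaver is a fixed random permutation.

```python
import numpy as np

def turbo_encode(b, rsc_encode, perm):
    """b: length-K message bits; perm: interleaver permutation of length K.
    Returns the transmitted streams (c1_sys, c1_par, c2_par): overall rate 1/3."""
    c1_sys, c1_par = rsc_encode(b)        # encoder 1 on the original bit order
    _, c2_par = rsc_encode(b[perm])       # encoder 2 on the interleaved bits; its systematic
                                          # stream duplicates c1_sys and is dropped
    return c1_sys, c1_par, c2_par

K = 100
perm = np.random.permutation(K)           # fixed random interleaver
```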
Previous N-RSC we proposed is customized for uniform prior distribution. The architecture is similar to the one for N-RSC. The main difference is input size (3 instead of 2) and the type of RNN (LSTM instead of GRU). To generate N training examples of {(noisycodeword, prior), posterior}, we generate N/12 examples of turbo codes. Then we ran turbo decoder for 12 component decoding -and collect input output pairs from the 12 intermediate steps of Turbo decoder, implemented in python shown in FIG0.We train with codes with blocklength 100 at fixed SNR -1dB. We use mean squared error in as a cost function. To generate training examples with non-zero priors, i.e. example of a triplet (prior probabilities {Pr(b k)} of the intermediate layers, we take as an example the triplet: the input prior probability, the input sequence, and the output of the BJCR layer. We fix training SNR to be -1dB. We stack 6 layers of BCJR decoder with interleavers in between. The last layer of our neural decoder is trained slightly differently to output the estimated message bit and not the posterior probability. Accordingly, we use binary crossentropy loss of as a cost function. We train each N-BCJR layer with 2,000 examples of length 100 turbo encoder, and in the end-to-end training of N-Turbo, we train with 1,000 examples of length 1,000 turbo encoder. We train with 10 epochs and ADAM optimizer with learning rate 0.001. For the end-to-end training, we again use a fixed SNR of noise (-1dB), and test on various SNRs. The choice of training SNR is discussed in detail in the Appendix D.Performance. As can be seen in FIG0, the proposed N-Turbo meets the performance of turbo decoder for block length 100, and in some cases, for test SNR= 2, it achieves a higher accuracy. Similar to N-RSC, N-Turbo generalizes to unseen codewords, as we only show 3, 000 examples in total. It also seamlessly generalizes in the test SNR, as training SNR is fixed at −1dB. In this section, we show the performances of neural networks of various recurrent network architectures in decoding rate-1/2 RSC code and in learning BCJR algorithm with non-zero priors. Table 1 shows the BER of various types of recurrent neural networks trained under the same condition as in N-RSC (120000 example, code length 100). We can see that BERs of the 1-layered RNN and single-directional RNN are order-wise worse than the one of 2-layered GRU (N-RSC), and two layers is sufficient. Table 2 shows the performance of neural networks of various recurrent network As it is natural to sample the training data and test data from the same distribution, one might use the same noise level for testing and training. However, this matched SNR is not reliable as shown in FIG2. We give an analysis that predicts the appropriate choice of training SNR that might be different from testing SNR, and justify our choice via comparisons over various pairs of training and testing SNRs. We conjecture that the optimal training SNR that gives best BER for a target testing SNR depends on the coding rate. A coding rate is defined as the ratio between the length of the message bit sequence K and the length of the transmitted codeword sequence c. The example we use in this paper is a rate r = 1/2 code with length of c equal to 2K. For a rate r code, we propose using training SNR according to DISPLAYFORM0 and call the knee of this curve f (r) = 10 log 10 (2 2r − 1) a threshold. In particular, this gives SNR train = min{SNR test, 0} for rate 1/2 codes. 
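The proposed training-SNR rule can be written directly from the threshold f(r) = 10 log10(2^{2r} − 1); for r = 1/2 it reduces to SNR_train = min{SNR_test, 0} as stated above, and for r = 1/3 the threshold is about −2.31 dB, matching the values listed in the appendix.

```python
import math

def train_snr(snr_test_db, rate):
    threshold_db = 10 * math.log10(2 ** (2 * rate) - 1)   # f(r), the knee of the curve
    return min(snr_test_db, threshold_db)

print(round(train_snr(5.0, 1/3), 2))   # -2.31
print(round(train_snr(5.0, 1/2), 2))   #  0.0
```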
In FIG0 left, we train our neural decoder for RSC encoders of varying rates of r ∈ {1/2, 1/3, 1/4, 1/5, 1/6, 1/7} whose corresponding f (r) = {0, −2.31, −3.82, −4.95, −5.85, −6.59}. f (r) is plotted as a function of the rate r in FIG0 right panel. Compared to the grey shaded region of empirically observed region of training SNR that achieves the best performance, we see that it follows the theoretical prediction up to a small shift. The figure on the left shows empirically observed best SNR for training at each testing SNR for various rate r codes. We can observe that it follows the trend of the theoretical prediction of a curve Table 2: MSE of trained neural models with different number/type of RNN layers in learning BCJR algorithm with non-zero priors with a knee. Before the threshold, it closely aligns with the 45-degree line SNR train = SNR test. around the threshold, the curves become constant functions. We derive the formula in in two parts. When the test SNR is below the threshold, then we are targeting for bit error rate (and similarly the block error rate) of around 10 −1 ∼ 10 −2. This implies that significant portion of the testing examples lie near the decision boundary of this problem. Hence, it makes sense to show matching training examples, as significant portion of the training examples will also be at the boundary, which is what we want in order to maximize the use of the samples. On the other hand, when we are above the threshold, our target bit-error-rate can be significantly smaller, say 10 −6. In this case, most of the testing examples are easy, and only a very small proportion of the testing examples lie at the decision boundary. Hence, if we match training SNR, most of the examples will be wasted. Hence, we need to show those examples at the decision boundary, and we propose that the training examples from SNR 10 log 10 (2 2r − 1) should lie near the boundary. This is a crude estimate, but effective, and can be computed using the capacity achieving random codes for AWGN channels and the distances between the codes words at capacity. Capacity is a fundamental limit on what rate can be used at a given test SNR to achieve small error. In other words, for a given test SNR over AWGN channel, Gaussian capacity gives how closely we can pack the codewords (the classes in our classification problem) so that they are as densely packed as possible. This gives us a sense of how decision boundaries (as measured by the test SNR) depend on the rate. It is given by the Gaussian capacity rate = 1/2 log(1 + SN R). Translating this into our setting, we set the desired threshold that we seek. | We show that creatively designed and trained RNN architectures can decode well known sequential codes and achieve close to optimal performances. | 606 | scitldr |
Adam has been shown to fail to converge to the optimal solution in certain cases. Researchers have recently proposed several algorithms to avoid the non-convergence issue of Adam, but their efficiency turns out to be unsatisfactory in practice. In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods. We argue that there exists an inappropriate correlation between the gradient $g_t$ and the second moment term $v_t$ in Adam ($t$ is the timestep), which results in a large gradient being likely to have a small step size, while a small gradient may have a large step size. We demonstrate that such unbalanced step sizes are the fundamental cause of the non-convergence of Adam, and we further prove that decorrelating $v_t$ and $g_t$ leads to an unbiased step size for each gradient, thus solving the non-convergence problem of Adam. Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates $v_t$ and $g_t$ by temporal shifting, i.e., using the temporally shifted gradient $g_{t-n}$ to calculate $v_t$. The experiments demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization. First-order optimization algorithms with adaptive learning rate play an important role in deep learning due to their efficiency in solving large-scale optimization problems. Denote g_t ∈ R^n as the gradient of the loss function f with respect to its parameters θ ∈ R^n at timestep t; then the general updating rule of these algorithms can be written as θ_{t+1} = θ_t − (α_t / √v_t) · m_t. In this equation, m_t ≜ φ(g_1, ..., g_t) ∈ R^n is a function of the historical gradients; v_t ≜ ψ(g_1, ..., g_t) ∈ R^n_+ is an n-dimensional vector with non-negative elements, which adapts the learning rate for the n elements in g_t respectively; α_t is the base learning rate; and α_t / √v_t is the adaptive step size for m_t. One common choice of φ(g_1, ..., g_t) is the exponential moving average of the gradients used in Momentum and Adam, which helps alleviate gradient oscillations. The commonly used ψ(g_1, ..., g_t) in the deep learning community is the exponential moving average of squared gradients, as in Adadelta, RMSProp, Adam and Nadam. Adam is a typical adaptive learning rate method, which assembles the ideas of exponential moving averages of the first and second moments and bias correction. In general, Adam is robust and efficient in both dense and sparse gradient cases, and is popular in deep learning research. However, Adam has been shown to fail to converge to the optimal solution in certain cases. It has been pointed out that the key issue in the convergence proof of Adam lies in the quantity Γ_t = (√v_t / α_t − √v_{t−1} / α_{t−1}), which is assumed to be positive, but unfortunately such an assumption does not always hold in Adam. A set of counterexamples demonstrates that the violation of the positiveness of Γ_t leads to undesirable convergence behavior in Adam. Two variants, AMSGrad and AdamNC, were subsequently proposed to address the issue by keeping Γ_t positive. Specifically, AMSGrad defines v̂_t as the historical maximum of v_t, i.e., v̂_t = max{v_i}_{i=1}^{t}, and replaces v_t with v̂_t to keep v_t non-decreasing, therefore forcing Γ_t to be positive; AdamNC instead forces v_t to have "long-term memory" of past gradients and calculates v_t as their average to make it stable.
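For reference, a minimal numpy sketch of the generic element-wise update and of the AMSGrad modification discussed above (bias correction omitted for brevity):

```python
import numpy as np

def adam_step(theta, g, m, v, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g           # first moment: moving average of g_t
    v = beta2 * v + (1 - beta2) * g ** 2      # second moment: moving average of g_t^2
    theta = theta - lr * m / (np.sqrt(v) + eps)
    return theta, m, v

def amsgrad_step(theta, g, m, v, v_hat, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    v_hat = np.maximum(v_hat, v)              # keep the denominator non-decreasing,
    theta = theta - lr * m / (np.sqrt(v_hat) + eps)   # so that Gamma_t stays positive
    return theta, m, v, v_hat
```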
Though these two algorithms solve the non-convergence problem of Adam to a certain extent, they turn out to be inefficient in practice: they have to maintain a very large v t once a large gradient appears, and a large v t decreases the adaptive learning rate αt √ vt and slows down the training process. In this paper, we provide a new insight into adaptive learning rate methods, which brings a new perspective on solving the non-convergence issue of Adam. Specifically, in Section 3, we study the counterexamples provided by via analyzing the accumulated step size of each gradient g t. We observe that in the common adaptive learning rate methods, a large gradient tends to have a relatively small step size, while a small gradient is likely to have a relatively large step size. We show that the unbalanced step sizes stem from the inappropriate positive correlation between v t and g t, and we argue that this is the fundamental cause of the non-convergence issue of Adam. In Section 4, we further prove that decorrelating v t and g t leads to equal and unbiased expected step size for each gradient, thus solving the non-convergence issue of Adam. We subsequently propose AdaShift, a decorrelated variant of adaptive learning rate methods, which achieves decorrelation between v t and g t by calculating v t using temporally shifted gradients. Finally, in Section 5, we study the performance of our proposed AdaShift, and demonstrate that it solves the non-convergence issue of Adam, while still maintaining a decent performance compared with Adam in terms of both training speed and generalization. Adam. In Adam, m t and v t are defined as the exponential moving average of g t and g 2 t: m t = β 1 m t−1 + (1 − β 1)g t and v t = β 2 v t−1 + (1 − β 2)g 2 t,where β 1 ∈ and β 2 ∈ are the exponential decay rates for m t and v t, respectively, with m 0 = 0 and v 0 = 0. They can also be written as: DISPLAYFORM0 To avoid the bias in the estimation of the expected value at the initial timesteps, propose to apply bias correction to m t and v t. Using m t as instance, it works as follows: DISPLAYFORM1 Online optimization problem. An online optimization problem consists of a sequence of cost functions f 1 (θ),..., f t (θ),..., f T (θ), where the optimizer predicts the parameter θ t at each timestep t and evaluate it on an unknown cost function f t (θ). The performance of the optimizer is usually evaluated by regret R(T) DISPLAYFORM2, which is the sum of the difference between the online prediction f t (θ t) and the best fixed-point parameter prediction f t (θ *) for all the previous steps, where θ * = arg min θ∈ϑ T t=1 f t (θ) is the best fixed-point parameter from a feasible set ϑ.Counterexamples. highlight that for any fixed β 1 and β 2, there exists an online optimization problem where Adam has non-zero average regret, i.e., Adam does not converge to optimal solution. The counterexamples in the sequential version are given as follows: DISPLAYFORM3 where C is a relatively large constant and d is the length of an epoch. In Equation 6, most gradients of f t (θ) with respect to θ are −1, but the large positive gradient C at the beginning of each epoch makes the overall gradient of each epoch positive, which means that one should decrease θ t to minimize the loss. However, according to , the accumulated update of θ in Adam under some circumstance is opposite (i.e., θ t is increased), thus Adam cannot converge in such case. 
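The sequential counterexample (Equation 6) can be instantiated with a simple gradient oracle; the convention that the large gradient appears at t mod d = 1 follows the text, and the values C = d = 101 are illustrative choices making the per-epoch gradient sum C − (d − 1) = +1 positive.

```python
def grad_sequential(t, C=101, d=101):
    # gradient of f_t(theta) w.r.t. theta in the counterexample of Equation 6:
    # +C once per epoch, -1 on all other steps; the epoch-wise sum is positive,
    # so the optimal behaviour is to decrease theta.
    return float(C) if t % d == 1 else -1.0
```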
argue that the reason of the non-convergence of Adam lies in that the positive assumption of Γ t (√ v t /α t − √ v t−1 /α t−1) does not always hold in Adam. The counterexamples are also extended to stochastic cases in , where a finite set of cost functions appear in a stochastic order. Compared with sequential online optimization counterexample, the stochastic version is more general and closer to the practical situation. For the simplest one dimensional case, at each timestep t, the function f t (θ) is chosen as i.i.d.: DISPLAYFORM4 where δ is a small positive constant that is smaller than C. The expected cost function of the above problem is F (θ) = 1+δ C+1 Cθ − C−δ C+1 θ = δθ, therefore, one should decrease θ to minimize the loss. prove that when C is large enough, the expectation of accumulated parameter update in Adam is positive and in increasing θ. propose maintaining the strict positiveness of Γ t as solution, for example, keeping v t non-decreasing or using increasing β 2. In fact, keeping Γ t positive is not the only way to guarantee the convergence of Adam. Another important observation is that for any fixed sequential online optimization problem with infinitely repeating epochs (e.g., Equation 6), Adam will converge as long as β 1 is large enough. Formally, we have the following theorem: Theorem 1 (The influence of β 1). For any fixed sequential online convex optimization problem with infinitely repeating of finite length epochs (d is the length of an epoch), if ∃G ∈ R such that DISPLAYFORM0 for any fixed β 2 ∈, there exists a β 1 ∈ such that Adam has average regret ≤ 2;The intuition behind Theorem 1 is that, if DISPLAYFORM1 In this section, we study the non-convergence issue by analyzing the counterexamples provided by. We show that the fundamental problem of common adaptive learning rate methods is that: v t is positively correlated to the scale of gradient g t, which in a small step size α t / √ v t for a large gradient, and a large step size for a small gradient. We argue that such an unbalanced step size is the cause of non-convergence. We will first define net update factor for the analysis of the accumulated influence of each gradient g t, then apply the net update factor to study the behaviors of Adam using Equation 6 as an example. The argument will be extended to the stochastic online optimization problem and general cases. When β 1 = 0, due to the exponential moving effect of m t, the influence of g t exists in all of its following timesteps. For timestep i (i ≥ t), the weight of g t is (1 − β 1)β i−t 1. We accordingly define a new tool for our analysis: the net update net(g t) of each gradient g t, which is its accumulated influence on the entire optimization process: DISPLAYFORM0 and we call k(g t) the net update factor of g t, which is the equivalent accumulated step size for gradient g t. Note that k(g t) depends on {v i} It is worth noticing that in Momentum method, v t is equivalently set as 1. Therefore, we have k(g t) = α t and net(g t) = α t g t, which means that the accumulated influence of each gradient g t in Momentum is the same as vanilla SGD (Stochastic Gradient Decent). Hence, the convergence of Momentum is similar to vanilla SGD. However, in adaptive learning rate methods, v t is function over the past gradients, which makes its convergence nontrivial. Note that v t exists in the definition of net update factor (Equation 8). 
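A small simulation of the stochastic counterexample (Equation 7) makes the failure visible. The settings below (C = 101, δ = 0.02, α = 0.001, β1 = 0, β2 = 0.999) are those used later in the paper's synthetic experiment; bias correction is omitted. Since the expected gradient is +δ > 0, θ should decrease, yet Adam typically drifts it upward.

```python
import numpy as np

np.random.seed(0)
C, delta, lr, beta2, eps = 101.0, 0.02, 1e-3, 0.999, 1e-8
p_large = (1 + delta) / (C + 1)          # probability of drawing gradient +C
theta, v = 0.0, 0.0

for t in range(500_000):
    g = C if np.random.rand() < p_large else -1.0
    v = beta2 * v + (1 - beta2) * g ** 2
    theta -= lr * g / (np.sqrt(v) + eps)  # beta1 = 0, i.e. no momentum

print(theta)   # tends to end up positive: Adam moves in the wrong direction
```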
Before further analyzing the convergence of Adam using the net update factor, we first study the pattern of v t in the sequential online optimization problem in Equation 6. Since Equation 6 is deterministic, we can derive the formula of v t as follows:Lemma 2. In the sequential online optimization problem in Equation 6, denote β 1, β 2 ∈ as the decay rates, d ∈ N as the length of an epoch, n ∈ N as the index of epoch, and i ∈ {1, 2, ..., d} as the index of timestep in one epoch. Then the limit of v nd+i when n → ∞ is: DISPLAYFORM0 Given the formula of v t in Equation 9, we now study the net update factor of each gradient. We start with a simple case where β 1 = 0. In this case we have DISPLAYFORM1 Since the limit of v nd+i in each epoch monotonically decreases with the increase of index i according to Equation 9, the limit of k(g nd+i) monotonically increases in each epoch. Specifically, the first gradient g nd+1 = C in epoch n represents the correct updating direction, but its influence is the smallest in this epoch. In contrast, the net update factor of the subsequent gradients −1 are relatively larger, though they indicate a wrong updating direction. We further consider the general case where β 1 = 0. The is presented in the following lemma:Lemma 3. In the sequential online optimization problem in Equation 6, when n → ∞, the limit of net update factor k(g nd+i) of epoch n satisfies: DISPLAYFORM2 and lim DISPLAYFORM3 where k(C) denotes the net update factor for gradient g i = C.Lemma 3 tells us that, in sequential online optimization problem in Equation 6, the net update factors are unbalanced. Specifically, the net update factor for the large gradient C is the smallest in the entire epoch, while all gradients −1 have larger net update factors. Such unbalanced net update factors will possibly lead Adam to a wrong accumulated update direction. Similar also holds in the stochastic online optimization problem in Equation 7. We derive the expectation of the net update factor for each gradient in the following lemma:Lemma 4. In the stochastic online optimization problem in Equation 7, assuming α t = 1, it holds that k(C) < k(−1), where k(C) denote the expectation net update factor for g i = C and k(−1) denote the expectation net update factor for g i = −1.Though the formulas of net update factors in the stochastic case are more complicated than those in deterministic case, the analysis is actually more easier: the gradients with the same scale share the same expected net update factor, so we only need to analyze k(C) and k(−1). From Lemma 4, we can see that in terms of the expectation net update factor, k(C) is smaller than k(−1), which means the accumulated influence of gradient C is smaller than gradient −1. As we have observed in the previous section, a common characteristic of these counterexamples is that the net update factor for the gradient with large magnitude is smaller than these with small magnitude. The above observation can also be interpreted as a direct consequence of inappropriate correlation between v t and g t. Recall that v t = β 2 v t−1 + (1 − β 2)g 2 t. Assuming v t−1 is independent of g t, then: when a new gradient g t arrives, if g t is large, v t is likely to be larger; and if g t is small, v t is also likely to be smaller. If β 1 = 0, then k(g t) = α t / √ v t. As a , a large gradient is likely to have a small net update factor, while a small gradient is likely to have a large net update factor in Adam. 
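With β1 = 0 the net update factor of a gradient is simply α_t/√v_t at the step where it is applied, so the imbalance stated in Lemma 4 can be checked empirically by averaging that quantity separately over the +C and the −1 gradients (α_t = 1 as in the lemma; constants as in the stochastic problem):

```python
import numpy as np

np.random.seed(1)
C, delta, beta2 = 101.0, 0.02, 0.999
p_large = (1 + delta) / (C + 1)
v, k_C, k_minus1 = 0.0, [], []

for t in range(500_000):
    g = C if np.random.rand() < p_large else -1.0
    v = beta2 * v + (1 - beta2) * g ** 2
    (k_C if g > 0 else k_minus1).append(1.0 / np.sqrt(v))   # net update factor, alpha_t = 1

print(np.mean(k_C), np.mean(k_minus1))
# the +C gradients receive a smaller average factor; in this setting the gap
# is enough to flip the expected accumulated update direction
```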
When it comes to the scenario where β 1 > 0, the arguments are actually quite similar. Given DISPLAYFORM0 are independent from g t, then: not only does v t positively correlate with the magnitude of g t, but also the entire infinite sequence {v i} ∞ i=t positively correlates with the magnitude of g t. Since the net update factor k(DISPLAYFORM1, it is thus negatively correlated with the magnitude of g t . That is, k(g t) for a large gradient is likely to be smaller, while k(g t) for a small gradient is likely to be larger. The unbalanced net update factors cause the non-convergence problem of Adam as well as all other adaptive learning rate methods where v t correlates with g t. To construct a counterexample, the same pattern is that: the large gradient is along the "correct" direction, while the small gradient is along the opposite direction. Due to the fact that the accumulated influence of a large gradient is small while the accumulated influence of a small gradient is large, Adam may update parameters along the wrong direction. Finally, we would like to emphasize that even if Adam updates parameters along the right direction in general, the unbalanced net update factors are still unfavorable since they slow down the convergence. According to the previous discussion, we conclude that the main cause of the non-convergence of Adam is the inappropriate correlation between v t and g t. Currently we have two possible solutions: making v t act like a constant, which declines the correlation, e.g., using a large β 2 or keep v t nondecreasing ; using a large β 1 (Theorem 1), where the aggressive momentum term helps to mitigate the impact of unbalanced net update factors. However, neither of them solves the problem fundamentally. The dilemma caused by v t enforces us to rethink its role. In adaptive learning rate methods, v t plays the role of estimating the second moments of gradients, which reflects the scale of gradient on average. With the adaptive learning rate α t / √ v t, the update step of g t is scaled down by √ v t and achieves rescaling invariance with respect to the scale of g t, which is practically useful to make the training process easy to control and the training system robust. However, the current scheme of v t, i.e., v t = β 2 v t−1 + (1 − β 2)g 2 t, brings a positive correlation between v t and g t, which in reducing the effect of large gradients and increasing the effect of small gradients, and finally causes the non-convergence problem. Therefore, the key is to let v t be a quantity that reflects the scale of the gradients, while at the same time, be decorrelated with current gradient g t. Formally, we have the following theorem:Theorem 5 (Decorrelation leads to convergence). For any fixed online optimization problem with infinitely repeating of a finite set of cost functions {f 1 (θ),..., f t (θ),... f n (θ)}, assuming β 1 = 0 and α t is fixed, we have, if v t follows a fixed distribution and is independent of the current gradient g t, then the expected net update factor for each gradient is identical. Let P v denote the distribution of v t. In the infinitely repeating online optimization scheme, the expectation of net update factor for each gradient g t is DISPLAYFORM0 Given P v is independent of g t, the expectation of the net update factor E[k(g t)] is independent of g t and remains the same for different gradients. 
With the expected net update factor being a fixed constant, the convergence of the adaptive learning rate method reduces to vanilla SGD.Momentum can be viewed as setting v t as a constant, which makes v t and g t independent. Furthermore, in our view, using an increasing β 2 (AdamNC) or keepingv t as the largest v t (AMSGrad) is also to make v t almost fixed. However, fixing v t is not a desirable solution, because it damages the adaptability of Adam with respect to the adapting of step size. We next introduce the proposed solution to make v t independent of g t, which is based on temporal independent assumption among gradients. We first introduce the idea of temporal decorrelation, then extend our solution to make use of the spatial information of gradients. Finally, we incorporate first moment estimation. The pseudo code of the proposed algorithm is presented as follows. Algorithm 1 AdaShift: Temporal Shifting with Block-wise Spatial Operation DISPLAYFORM1 5: DISPLAYFORM2 end for 9: end for 10: // We ignore the bias-correction, epsilon and other misc for the sake of clarity In practical setting, f t (θ) usually involves different mini-batches x t, i.e., f t (θ) = f (θ; x t). Given the randomness of mini-batch, we assume that the mini-batch x t is independent of each other and further assume that f (θ; x) keeps unchanged over time, then the gradient g t = ∇f (θ; x t) of each mini-batch is independent of each other. Therefore, we could change the update rule for v t to involve g t−n instead of g t, which makes v t and g t temporally shifted and hence decorrelated: DISPLAYFORM0 Note that in the sequential online optimization problem, the assumption "g t is independent of each other" does not hold. However, in the stochastic online optimization problem and practical neural network settings, our assumption generally holds. Most optimization schemes involve a great many parameters. The dimension of θ is high, thus g t and v t are also of high dimension. However, v t is element-wisely computed in Equation 14. Specifically, we only use the i-th dimension of g t−n to calculate the i-th dimension of v t. In other words, it only makes use of the independence between g t−n [i] and g t [i], where g t [i] denotes the i-th element of g t. Actually, in the case of high-dimensional g t and v t, we can further assume that all elements of gradient g t−n at previous timesteps are independent with the i-th dimension of g t. Therefore, all elements in g t−n can be used to compute v t without introducing correlation. To this end, we propose introducing a function φ over all elements of g 2 t−n, i.e., DISPLAYFORM0 For easy reference, we name the elements of g t−n other than g t−n [i] as the spatial elements of g t−n and name φ the spatial function or spatial operation. There is no restriction on the choice of φ, and we use φ(x) = max i x[i] for most of our experiments, which is shown to be a good choice. The max i x[i] operation has a side effect that turns the adaptive learning rate v t into a shared scalar. An important thing here is that, we no longer interpret v t as the second moment of g t. It is merely a random variable that is independent of g t, while at the same time, reflects the overall gradient scale. We leave further investigations on φ as future work. In practical setting, e.g., deep neural network, θ usually consists of many parameter blocks, e.g., the weight and bias for each layer. In deep neural network, the gradient scales (i.e., the variance) for different layers tend to be different . 
Different gradient scales make it hard to find a learning rate that is suitable for all layers, when using SGD and Momentum methods. In traditional adaptive learning rate methods, they apply element-wise rescaling for each gradient dimension, which achieves rescaling-invariance and somehow solves the above problem. However, Adam sometimes does not generalize better than SGD , which might relate to the excessive learning rate adaptation in Adam. In our temporal decorrelation with spatial operation scheme, we can solve the "different gradient scales" issue more naturally, by applying φ block-wisely and outputs a shared adaptive learning rate scalar v t [i] for each block: DISPLAYFORM0 It makes the algorithm work like an adaptive learning rate SGD, where each block has an adaptive learning rate α t / v t [i] while the relative gradient scale among in-block elements keep unchanged. As illustrated in Algorithm 1, the parameters θ t including the related g t and v t are divided into M blocks. Every block contains the parameters of the same type or same layer in neural network. First moment estimation, i.e., defining m t as a moving average of g t, is an important technique of modern first order optimization algorithms, which alleviates mini-batch oscillations. In this section, we extend our algorithm to incorporate first moment estimation. We have argued that v t needs to be decorrelated with g t. Analogously, when introducing the first moment estimation, we need to make v t and m t independent to make the expected net update factor unbiased. Based on our assumption of temporal independence, we further keep out the latest n gradients {g t−i} n−1 i=0, and update v t and m t via DISPLAYFORM0 In Equation 17, β 1 ∈ plays the role of decay rate for temporal elements. It can be viewed as a truncated version of exponential moving average that only applied to the latest few elements. Since we use truncating, it is feasible to use large β 1 without taking the risk of using too old gradients. In the extreme case where β 1 = 1, it becomes vanilla averaging. The pseudo code of the algorithm that unifies all proposed techniques is presented in Algorithm 1 and a more detailed version can be found in the Appendix. It has the following parameters: spatial operation φ, n ∈ N +, β 1 ∈, β 2 ∈ and α t. The key difference between Adam and the proposed method is that the latter temporally shifts the gradient g t for n-step, i.e., using g t−n for calculating v t and using the kept-out n gradients to evaluate m t (Equation 17), which makes v t and m t decorrelated and consequently solves the nonconvergence issue. In addition, based on our new perspective on adaptive learning rate methods, v t is not necessarily the second moment and it is valid to further involve the calculation of v t with the spatial elements of previous gradients. We thus proposed to introduce the spatial operation φ that outputs a shared scalar for each block. The ing algorithm turns out to be closely related to SGD, where each block has an overall adaptive learning rate and the relative gradient scale in each block is maintained. We name the proposed method that makes use of temporal-shifting to decorrelated v t and m t AdaShift, which means "ADAptive learning rate method with temporal SHIFTing". In this section, we empirically study the proposed method and compare them with Adam, AMSGrad and SGD, on various tasks in terms of training performance and generalization. 
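Before the experiments, the pieces above can be assembled into one AdaShift step (temporal shift by n, block-wise max as the spatial operation φ, and the truncated first-moment average of Equation 17). The numpy sketch below covers a single parameter block; the warm-up handling is simplified relative to Algorithm 1, bias correction and other details are omitted, and the default learning rate 0.01 reflects the later observation that AdaShift's best learning rate is roughly ten times that of Adam.

```python
from collections import deque
import numpy as np

class AdaShiftBlock:
    """One parameter block; phi is the max over all elements of the shifted
    gradient, so the whole block shares a single scalar adaptive rate."""
    def __init__(self, lr=0.01, n=10, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.n, self.beta2, self.eps = lr, n, beta2, eps
        w = beta1 ** np.arange(n - 1, -1, -1)   # beta1^{n-1}, ..., beta1, 1 (newest weighted 1)
        self.w = w / w.sum()                    # truncated first-moment weights (Eq. 17)
        self.queue = deque()                    # the n kept-out latest gradients
        self.v = 0.0

    def step(self, theta, g):
        if len(self.queue) < self.n:            # simplified warm-up: just collect gradients
            self.queue.append(g)
            return theta
        g_shifted = self.queue.popleft()        # g_{t-n}, assumed independent of g_t
        self.queue.append(g)                    # window now holds g_{t-n+1}, ..., g_t
        self.v = self.beta2 * self.v + (1 - self.beta2) * np.max(g_shifted ** 2)
        m = sum(wi * gi for wi, gi in zip(self.w, self.queue))
        return theta - self.lr * m / (np.sqrt(self.v) + self.eps)
```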
Without additional declaration, the reported for each algorithm is the best we have found via parameter grid search. The anonymous code is provided at http://bit.ly/2NDXX6x. Firstly, we verify our analysis on the stochastic online optimization problem in Equation 7, where we set C = 101 and δ = 0.02. We compare Adam, AMSGrad and AdaShift in this experiment. For fair comparison, we set α = 0.001, β 1 = 0 and β 2 = 0.999 for all these methods. The are shown in FIG1. We can see that Adam tends to increase θ, that is, the accumulate update of θ in Adam is along the wrong direction, while AMSGrad and AdaShift update θ in the correct direction. Furthermore, given the same learning rate, AdaShift decreases θ faster than AMSGrad, which validates our argument that AMSGrad has a relatively higher v t that slows down the training. In this experiment, we also verify Theorem 1. As shown in FIG1, Adam is also able to converge to the correct direction with a sufficiently large β 1 and β 2. Note that AdaShift still converges with the fastest speed; a small β 1 (e.g., β 1 = 0.9, the light-blue line in FIG1) does not make Adam converge to the correct direction. We do not conduct the experiments on the sequential online optimization problem in Equation 6, because it does not fit our temporal independence assumption. To make it converge, one can use a large β 1 or β 2, or set v t as a constant. We further compare the proposed method with Adam, AMSGrad and SGD by using Logistic Regression and Multilayer Perceptron on MNIST, where the Multilayer Perceptron has two hidden layers and each has 256 hidden units with no internal activation. The are shown in Figure 2 and Figure 3, respectively. We find that in Logistic Regression, these learning algorithms achieve very similar final in terms of both training speed and generalization. In Multilayer Perceptron, we compare Adam, AMSGrad and AdaShift with reduce-max spatial operation (max-AdaShift) and without spatial operation (non-AdaShift). We observe that max-AdaShift achieves the lowest training loss, while non-AdaShift has mild training loss oscillation and at the same time achieves better generalization. The worse generalization of max-AdaShift may be due to overfitting in this task, and the better generalization of non-AdaShift may stem from the regularization effect of its relatively unstable step size. ResNet and DenseNet are two typical modern neural networks, which are efficient and widely-used. We test our algorithm with ResNet and DenseNet on CIFAR-10 datasets. We use a 18-layer ResNet and 100-layer DenseNet in our experiments. We plot the best of Adam, AMSGrad and AdaShift in Figure 4 and Figure 5 for ResNet and DenseNet, respectively. We can see that AMSGrad is relatively worse in terms of both training speed and generalization. Adam and AdaShift share competitive , while AdaShift is generally slightly better, especially the test accuracy of ResNet and the training loss of DenseNet. We further increase the complexity of dataset, switching from CIFAR-10 to Tiny-ImageNet, and compare the performance of Adam, AMSGrad and AdaShift with DenseNet. The are shown in FIG3, from which we can see that the training curves of Adam and AdaShift are basically overlapped, but AdaShift achieves higher test accuracy than Adam. AMSGrad has relatively higher training loss, and its test accuracy is relatively lower at the initial stage. We also test our algorithm on the training of generative model and recurrent model. 
We choose WGAN-GP that involves Lipschitz continuity condition (which is hard to optimize), and Neural Machine Translation (NMT) that involves typical recurrent unit LSTM, respectively. In FIG4, we compare the performance of Adam, AMSGrad and AdaShift in the training of WGAN-GP discriminator, given a fixed generator. We notice that AdaShift is significantly better than Adam, while the performance of AMSGrad is relatively unsatisfactory. The test performance in terms of BLEU of NMT is shown in FIG4, where AdaShift achieves a higher BLEU than Adam and AMSGrad. In this paper, we study the non-convergence issue of adaptive learning rate methods from the perspective of the equivalent accumulated step size of each gradient, i.e., the net update factor defined in this paper. We show that there exists an inappropriate correlation between v t and g t, which leads to unbalanced net update factor for each gradient. We demonstrate that such unbalanced step sizes are the fundamental cause of non-convergence of Adam, and we further prove that decorrelating v t and g t will lead to unbiased expected step size for each gradient, thus solving the non-convergence problem of Adam. Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates v t and g t via calculating v t using temporally shifted gradient g t−n.In addition, based on our new perspective on adaptive learning rate methods, v t is no longer necessarily the second moment of g t, but a random variable that is independent of g t and reflects the overall gradient scale. Thus, it is valid to calculate v t with the spatial elements of previous gradients. We further found that when the spatial operation φ outputs a shared scalar for each block, the ing algorithm turns out to be closely related to SGD, where each block has an overall adaptive learning rate and the relative gradient scale in each block is maintained. The experiment demonstrate that AdaShift is able to solve the non-convergence issue of Adam. In the meantime, AdaShift achieves competitive and even better training and testing performance when compared with Adam. FIG7. It suggests that for a fixed sequential online optimization problem, both of β 1 and β 2 determine the direction and speed of Adam optimization process. Furthermore, we also study the threshold point of C and d, under which Adam will change to the incorrect direction, for each fixed β 1 and β 2 that vary among. To simplify the experiments, we keep d = C such that the overall gradient of each epoch being +1. The is shown in FIG7, which suggests, at the condition of larger β 1 or larger β 2, it needs a larger C to make Adam stride on the opposite direction. In other words, large β 1 and β 2 will make the non-convergence rare to happen. We also conduct the experiment in the stochastic problem to analyze the relation among C, β 1, β 2 and the convergence behavior of Adam. Results are shown in the FIG7 and FIG7 and the observations are similar to the previous: larger C will cause non-convergence more easily and a larger β 1 or β 2 somehow help to resolve non-convergence issue. In this experiment, we set δ = 1.Lemma 6 (Critical condition). 
In the sequential online optimization problem Equation 6, let α t being fixed, define S(β 1, β 2, C, d) to be the sum of the limits of step updates in a d-step epoch: DISPLAYFORM0 Let S(β 1, β 2, C) = 0, assuming β 2 and C are large enough such that v t 1, we get the equation: DISPLAYFORM1 Equation FORMULA0, though being quite complex, tells that both β 1 and β 2 are closely related to the counterexamples, and there exists a critical condition among these parameters. Algorithm 2 AdaShift: We use a first-in-first-out queue Q to denote the averaging window with the length of n. P ush(Q, g t) denotes pushing vector g t to the tail of Q, while P op(Q) pops and returns the head vector of Q. And W is the weight vector calculated via β 1. DISPLAYFORM0 3: for t = 1 to T do 4: DISPLAYFORM1 if t ≤ n then 6: DISPLAYFORM2 else 8: DISPLAYFORM3 P ush(Q, g t)10: DISPLAYFORM4 p t = p t−1 β 2 for i = 1 to M do 13: DISPLAYFORM0 14: DISPLAYFORM1 DISPLAYFORM2 To verify the temporal correlation between g t [i] and g t−n [i], we range n from 1 to 10 and calculate the average temporal correlation coefficient of all variables i. Results are shown in TAB3. To verify the spatial correlation between g t [i] and g t−n [j], we again range n from 1 to 10 and randomly sample some pairs of i and j and calculate the average spatial correlation coefficient of all the selected pairs. Results are shown in [i] and v t within max-AdaShift, we range the keep number n from 1 to 10 to calculate v t and the average correlation coefficient of all variables i. The is shown in Table 3 and Table 4. With bias correction, the formulation of m t is written as follows DISPLAYFORM3 According to L'Hospitals rule, we can draw the following: DISPLAYFORM4 Thus, DISPLAYFORM5 According to the definition of limitation, let g DISPLAYFORM6 2 So, m t shares the same sign with g * in every dimension. Given it is a convex optimization problem, let the optimal parameter be θ *, and the maximum step size is DISPLAYFORM7 Given ∇f t (θ) ∞ ≤ G, we have f t (θ) − f t (θ *) < 2, which implies the average regret DISPLAYFORM8 E PROOF OF LEMMA 2 DISPLAYFORM9 For a fixed d, as n approach infinity, we get the limit of m nd+i as: DISPLAYFORM10 Similarly, for v nd+i: DISPLAYFORM11 For a fixed d, as n approach infinity, we get the limit of v nd+i as: DISPLAYFORM12 Proof. First, we define V i as: DISPLAYFORM0 where 1 ≤ i ≤ d and i ∈ N. And V i has a period of d. Let t = t − nd, then we can draw: DISPLAYFORM1 Thus, we can get the forward difference of k(g nd+i) as: DISPLAYFORM2 V nd+1i monotonically increases within one period, when 1 ≤ i ≤ d and i ∈ N. And the weigh β j 1 for every difference term V j+i+1 − V i is fixed when i varies. Thus, the weighted summation DISPLAYFORM3 is monotonically decreasing from positive to negative. In other words, the forward difference is monotonically decreasing, such that there exists j, 1 ≤ j ≤ d and lim nd→∞ k(g nd+1) is the maximum among all net updates. Moreover, it is obvious that lim DISPLAYFORM4 is the minimum. Hence, we can draw the : DISPLAYFORM5 and lim DISPLAYFORM6 where K(C) is the net update factor for gradient g i = C. Lemma 7. 1 For a bounded random variable X and a differentiable function f (x), the expectation of f (X) is as follows: DISPLAYFORM0 where D(X) is variance of X, and R 3 is as follows: DISPLAYFORM1 F (x) is the distribution function of X. R 3 is a small quantity under some condition. And c is large enough, such that: for any > 0, DISPLAYFORM2 Proof. 
(Proof of Lemma 4) In the stochastic online optimization problem equation 7, the gradient subjects the distribution as: DISPLAYFORM3 Then we can get the expectation of g i: DISPLAYFORM4 Meanwhile, under the assumption that gradients are i.i.d., the expectation and variance of v i are as following when nd → ∞: DISPLAYFORM5 Then, for the gradient g i, the net update factor is as follows: DISPLAYFORM6 It should to be clarified that we define t j=1 β t−j 2 g 2 i+j equal to zero when t = 0. Then we define X t as: DISPLAYFORM7 DISPLAYFORM8 DISPLAYFORM9 = β 2(t+1) 2 DISPLAYFORM10 DISPLAYFORM11 f (x) = 3 · x −5/2 8 According to lemma 7, we can the expectation of f (X t) as follows: DISPLAYFORM12 DISPLAYFORM13 where DISPLAYFORM14 Then for gradient C and −1, the net update factor is as follows: DISPLAYFORM15 and DISPLAYFORM16 We can see that each term in the infinite series of k(C) is smaller than the corresponding one in k(−1). Thus, k(C) < k(−1). Proof. From Lemma 2, we can get: DISPLAYFORM0 We sum up all updates in an epoch, and define the summation as S(β 1, β 2, C). DISPLAYFORM1 Assume β 2 and C are large enough such that v t 1, we get the approximation of limit of v nd+i as: DISPLAYFORM2 Then we can draw the expression of S(β 1, β 2, C) as: DISPLAYFORM3 Let S(β 1, β 2, C) = 0, we get the equation about critical condition: DISPLAYFORM4 I HYPER-PARAMETERS INVESTIGATION Here, we list all hyper-parameter setting of all above experiments. In this section, we discuss the learning rate α t sensitivity of AdaShift. We set α t ∈ {0.1, 0.01, 0.001} and let n = 10, β 1 = 0.9 and β 2 = 0.999. The are shown in Figure 9 and FIG1. Empirically, we found that when using the max spatial operation, the best learning rate for AdaShift is around ten times of Adam. In this section, we discuss the β 1 and β 2 sensitivity of AdaShift. We set α = 0.01, n = 10 and let β 1 ∈ {0, 0.9} and β 2 ∈ {0.9, 0.99, 0.999}. The are shown in FIG1 and FIG1. According to the , AdaShift holds a low sensitivity to β 1 and β 2. In some tasks, using the first moment estimation (with β 1 = 0.9 and n = 10) or using a large β 2, e.g., 0.999 can attain better performance. The suggested parameters setting is n = 10, β 1 = 0.9, β 2 = 0.999. In this section, we discuss the n sensitivity of AdaShift. Here we also test a extended version of first moment estimation where it only uses the latest m gradients (m ≤ n): DISPLAYFORM0 We set β 1 = 0.9, β 2 = 0.999. The are shown in FIG1, FIG1 and FIG1. In these experiments, AdaShift is fairly stable when changing n and m. We have not find a clear pattern on the performance change with respect to n and m. J TEMPORAL-ONLY AND SPATIAL-ONLYIn our proposed algorithm, we apply a spatial operation on the temporally shifted gradient g t−n to update v t: DISPLAYFORM1 It is based on the temporal independent assumption, i.e., g t−n is independent of g t. And according to our argument in Section 4.2, one can further assume every element in g t−n is independent of the i-th dimension of g t.We purposely avoid involving the spatial elements of the current gradient g t, where the independence might not holds: when a sample which is rare and has a large gradient appear in the mini-batch x t, the overall scale of gradient g t might increase. However, for the temporally already decorrelation g t−i, further taking the advantage of the spatial irrelevance will not suffer from this problem. 
We here provide extended experiments on two variants of AdaShift: (i) AdaShift (temporal-only), which only uses the vanilla temporal independent assumption and evaluate v t with: v t = β 2 v t−1 + (1 − β 2)g 2 t−n; (ii) AdaShift (spatial-only), which directly uses the spatial elements without temporal shifting. According to our experiments, AdaShift (temporal-only), i.e., without the spatial operation, is less stable than AdaShift. In some tasks, AdaShift (temporal-only) works just fine; while in some other cases, AdaShift (temporal-only) suffers from explosive gradient and requires a relatively small learning rate. The performance of AdaShift (spatial-only) is close to Adam. More experiments for AdaShift (spatial-only) are included in the next section. In this section, we extend the experiments and add the comparisons with Nadam and AdaShift (spatial-only). The are shown in FIG1, Figure19 and Figure20. According to these experiments, Nadam and AdaShift (spatial-only) share similar performence as Adam. Rahimi & Recht raise the point, at test of time talk at NIPS 2017, that it is suspicious that gradient descent (aka back-propagation) is ultimate solution for optimization. A ill-conditioned quadratic problem with Two Layer Linear Net is showed to be challenging for gradient descent based methods, while alternative solutions, e.g., Levenberg-Marquardt, may converge faster and better. The problem is defined as follows: DISPLAYFORM0 where A is some known badly conditioned matrix (k = 10 20 or 10 5), and W 1 and W 2 are the trainable parameters. We test SGD, Adam and AdaShift with this problem, the are shown in FIG1, Figure 24. It turns out as long as the training goes enough long, SGD, Adam, AdaShift all basically converge in this problem. Though SGD is significantly better than Adam and AdaShift. We would tend to believe this is a general issue of adaptive learning rate method when comparing with vanilla SGD. Because these adaptive learning rate methods generally are scale-invariance, i.e., the step-size in terms of g t /sqrt(v t) is basically around one, which makes it hard to converge very well in such a ill-conditioning quadratic problem. SGD, in contrast, has a step-size g t; as the training converges SGD would have a decreasing step-size, makes it much easier to converge better. The above analysis is confirmed with Figure 22 and Figure 23, with a decreasing learning rate, Adam and AdaShfit both converge very good. | We analysis and solve the non-convergence issue of Adam. | 607 | scitldr |
Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset. However, in practical applications, we typically have access to multiple sources. In this paper we propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adversarial Networks. Our method is inspired by the observation that the appearance of a given image depends on three factors: the domain, the style (characterized in terms of low-level features variations) and the content. For this reason we propose to project the image features onto a space where only the dependence from the content is kept, and then re-project this invariant representation onto the pixel space using the target domain and style. In this way, new labeled images can be generated which are used to train a final target classifier. We test our approach using common MSDA benchmarks, showing that it outperforms state-of-the-art methods. A well known problem in computer vision is the need to adapt a classifier trained on a given source domain in order to work on another domain, i.e. the target. Since the two domains typically have different marginal feature distributions, the adaptation process needs to align the one to the other in order to reduce the domain shift . In many practical scenarios, the target data are not annotated and Unsupervised Domain Adaptation (UDA) methods are required. While most previous adaptation approaches consider a single source domain, in real world applications we may have access to multiple datasets. In this case, Multi-Source Domain Adaptation (MSDA) (; ; ;) methods may be adopted, in which more than one source dataset is considered in order to make the adaptation process more robust. However, despite more data can be used, MSDA is challenging as multiple domain shift problems need to be simultaneously and coherently solved. In this paper we tackle MSDA (unsupervised) problem and we propose a novel Generative Adversarial Network (GAN) for addressing the domain shift when multiple source domains are available. Our solution is based on generating artificial target samples by transforming images from all the source domains. Then the synthetically generated images are used for training the target classifier. While this strategy has been recently adopted in single-source UDA scenarios (; ; ; ;), we are the first to show how it can be effectively exploited in a MSDA setting. The holy grail of any domain adaptation method is to obtain domain invariant representations. Similarly, in multi-domain image-to-image translation tasks it is very crucial to obtain domain invariant representations in order to reduce the number of learned translations from O(N 2) to O(N), where N is the number of domains. Several domain adaptation methods (; ; ;) achieve domain-invariant representations by aligning only domain specific distributions. However, we postulate that style is the most important latent factor that describe a domain and need to be modelled separately for obtaining optimal domain invariant representation. More precisely, in our work we assume that the appearance of an image depends on three factors: i.e. the content, the domain and the style. The domain models properties that are shared by the elements of a dataset but which may not be shared by other datasets, whereas, the factor style represents a property that is shared among different parts of a single image and describes low-level features which concern a specific image. 
Our generator obtains the do-main invariant representation in a two-step process, by first obtaining style invariant representations followed by achieving domain invariant representation. In more detail, the proposed translation is implemented using a style-and-domain translation generator. This generator is composed of two main components, an encoder and a decoder. Inspired by in the encoder we embed whitening layers that progressively align the styleand-domain feature distributions in order to obtain a representation of the image content which is invariant to these factors. Then, in the decoder, we project this invariant representation onto a new domain-and-style specific distribution with Whitening and Coloring (W C) ) batch transformations, according to the target data. Importantly, the use of an intermediate, explicit invariant representation, obtained through W C, makes the number of domain transformations which need to be learned linear with the number of domains. In other words, this design choice ensures scalability when the number of domains increases, which is a crucial aspect for an effective MSDA method. Contributions. Our main contributions can be summarized as follows. (i) We propose the first generative model dealing with MSDA. We call our approach TriGAN because it is based on three different factors of the images: the style, the domain and the content. (ii) The proposed style-anddomain translation generator is based on style and domain specific statistics which are first removed from and then added to the source images by means of modified W C layers: Instance Whitening Transform (IW T), Domain Whitening Transform (DW T) , conditional Domain Whitening Transform (cDW T) and Adaptive Instance Whitening Transform (AdaIW T). Notably, the IW T and AdaIW T are novel layers introduced with this paper. (iii) We test our method on two MSDA datasets, Digits-Five and Office-Caltech10 , outperforming state-of-the-art methods. In this section we review previous approaches on UDA, considering both single source and multisource methods. Since, the proposed architecture is also related to deep models used for image-toimage translation, we also discuss related work on this topic. Single Source UDA. Single source UDA approaches assume a single labeled source domain and can be broadly classified under three main categories, depending upon the strategies adopted to cope with the domain-shift problem. The first category utilizes the first and second order statistics to model the source and target feature distributions. For instance, (; ; ;) minimize the Maximum Mean Discrepancy, i.e. the distance between the mean of feature distributions between the two domains. On the other hand,;; ) achieve domain invariance by aligning the second-order statistics through correlation alignment. Differently, (; ;) reduce the domain shift by domain alignment layers derived from batch normalization (BN) . This idea has been recently extended in , where grouped-feature whitening (DWT) is used instead of feature standardization as in BN. Contrarily, in our proposed encoder we use the W C transform, which we adapt to work in a generative network. In addition, we also propose other style and domain dependent batch-based normalizations (i.e., IW T, cDW T and AdaIW T). The second category of methods aim to build domain-agnostic representations by means of an adversarial learning-based approach. For instance, discriminative domain-invariant representations are constructed through a gradient reversal layer in . 
Similarly, the approach in ) utilizes a domain confusion loss to promote the alignment between the source and the target domain. A third category of methods use adversarial learning in a generative framework (i.e., GANs to reconstruct artificial source and/or target images and perform domain adaptation. Notable approaches are SBADA-GAN , CyCADA ), CoGAN , I2I Adapt and Generate To Adapt (GTA) . While these generative methods have been shown to be very successful in UDA, none of them deals with a multi-source setting. Indeed, extending these approaches to deal with multiple source domains is not trivial, because the construction of O(N 2) one-to-one translation generators and discriminator networks would most likely dramatically increase the number of parameters which need to be trained. Figure 1: An overview of the TriGAN generator. We schematically show 3 domains {T, S 1, S 2} -objects with holes, 3D objects and skewered objects, respectively. The content is represented by the object's shape -square, circle or triangle. The style is represented by the color: each image input to G has a different color and each domain has it own set of styles. First, the encoder E creates a styleinvariant representation using IWT blocks. DWT blocks are then used to obtain a domain-invariant representation. Symmetrically, the decoder D brings back domain-specific information with cDWT blocks (for simplicity we show only a single domain, T). Finally, we apply a reference style. The reference style is extracted using style path and it is applied using Adaptive IWT blocks. Multi-source UDA. deal with multiple-source knowledge transfer by borrowing knowledge from the target k nearest-neighbour sources. Similarly, a distribution-weighed combining rule is proposed in to construct a target hypothesis as a weighted combination of source hypotheses. Recently, Deep Cocktail Network (DCTN) uses the distribution-weighted combining rule in an adversarial setting. A Moment Matching Network (M 3 SDA) is introduced in for reducing the discrepancy between the multiple source and the target domains. Differently from these methods which operate in a discriminative setting, our method relies on a deep generative approach for MSDA. Image-to-image Translation. Image-to-image translation approaches, i.e. the methods that learn how to transform an image from one domain to another, possibly keeping its semantics, are the basis of our method. In ) the pix2pix network translates images under the assumption that paired images in the two domains are available at training time. In contrast, CycleGAN ) can learn to translate images using unpaired training samples. Note that, by design, these methods work with two domains. ComboGAN partially alleviates this issue by using N generators for translations among N domains. Our work is also related to StarGAN which handles unpaired image translation amongst N domains (N ≥ 2) through a single generator. However, StarGAN achieves image translation without explicitly forcing representations to be domain invariant and this may lead to significant reduction of network representation power as the number of domains increases. On the other hand, our goal is to obtain an explicit, intermediate image representation which is style-and-domain independent. We use IWT and DWT to achieve this. We also show that this invariant representation can simplify the re-projection process onto a desired style and target domain. 
This is achieved through AdaIW T and cDW T which into very realistic translations amongst domains. Very recently, a whitening and colouring based image-to-image translation method was proposed in , where the whitening operation is weight-based. Specifically, the whitening operation is approximated by enforcing the convariance matrix, computed from the intermediate features, to be equal to the identity matrix. Conversely, our whitening layers are data dependent and they use the Cholesky decomposition to compute the whitening matrices from the input samples in a closed form, thereby eliminating the need for additional ad-hoc losses. In this section we describe the proposed approach for MSDA. We first provide an overview of our method and introduce the notation adopted throughout the paper (Sec. 3.1). Then we describe the TriGAN architecture (Sec. 3.2) and our training procedure (Sec.3.3). In the MSDA scenario we have access to N labeled source datasets {S j} N j=1, where, and a target unlabeled dataset T = {x k} nt k=1. All the datasets (target included) share the same categories and each of them is associated to a domain D t, respectively. Our final goal is to build a classifier for the target domain D t exploiting the data in {S j} N j=1 ∪ T. Our method is based on two separate training steps. We initially train a generator G which learns how to change the appearance of a real input image in order to adhere to a desired domain and style. Importantly, our G learns mappings between every possible pair of image domains. Once G is trained, we use it to generate target data having the same content of the source data, thus creating a new, labeled, target dataset, which is finally used to train a target classifier C. In more detail, G is trained using {S j} N j=1 ∪ T, however no class label is involved in this phase and T is treated in the same way as the other domain datasets. As mentioned in Sec. 1, G is composed of an encoder E and a decoder D (Fig. 1). The role of E is to whiten, i.e., to remove, both domain-specific and style-specific aspects of the input image features in order to obtain domain and style invariant representations. Conversely and symmetrically, D needs to progressively project the domain-andstyle invariant features generated by E onto a domain-and-style specific space. At training time, G takes as input a batch of images B = {x 1, ..., x m} with corresponding domain labels L = {l 1, ..., l m}, where x i belongs to the domain D li and, and a batch of style images and has the same style of image x O i. The TriGAN architecture is made of a generator network G and a discriminator D P. As stated above, G comprises an encoder E and decoder D, which will be described in (Sec. 3.2.2-3.2.3). The discriminator D P is based on the Projection Discriminator . Before describing the details of G, we briefly review the W C transform ) (Sec. 3.2.1) which is used as the basic operation in our proposed batch-based feature transformations. h×w×d be the tensor representing the activation values of the convolutional feature maps in a given layer corresponding to the input image x, with d channels and h × w spatial locations. We treat each spatial location as a d-dimensional vector, in this way each image x i contains a set of vectors X i = {v 1, ..., v h×w}. With a slight abuse of the notation, we use.., v h×w×m }, which includes all the spatial locations in all the images in a batch. 
The W C transform is a multivariate extension of the per-dimension normalization and shiftscaling transform (BN) proposed in and widely adopted in both generative and discriminative networks. W C can be described by: where: In Eq. 2, µ B is the centroid of the elements in B, while W B is such that: B, where Σ B is the covariance matrix computed using B. The of applying Eq. 2 to the elements of B, is a set of whitened featuresB = {v 1, ...,v h×w×m}, which lie in a spherical distribution (i.e., with a covariance matrix equal to the identity matrix). On the other hand, Eq. 1 performs a coloring transform, i.e. projects the elements inB onto a learned multivariate Gaussian distribution. While µ B and W B are computed from the elements in B, Eq. 1 depends on the learnable d dimensional vector β and d × d dimensional matrix Γ. Eq. 1 is a linear operation and can be simply implemented using a convolutional layer with kernel size 1 × 1. In this paper we use the WC transform in our encoder E and decoder D, in order to first obtain a style-and-domain invariant representation for each x i ∈ B, and then transform this representation The encoder E is composed of a sequence of standard Convolution k×k -N ormalization -ReLU -AverageP ooling blocks and some ResBlocks (more details in Appendix B), in which we replace the common BN layers with our proposed normalization modules, which are detailed below. Obtaining Style Invariant Representations. In the first two blocks of E we whiten first and secondorder statistics of the low-level features of each X i ⊆ B, which are mainly responsible for the style of an image . To do so, we propose the Instance Whitening Transform (IW T), where the term instance is inspired by Instance Normalization (IN) and highlights that the proposed transform is applied to a set of features extracted from a single image Eq. 3 implies that whitening is performed using an image-specific feature centroid µ Xi and covariance matrix Σ Xi, which represent the first and second-order statistics of the low-level features of x i. Coloring is based on the parameters β and Γ, which do not depend on x i or l i. The coloring operation is the analogous of the shift-scaling per-dimension transform computed in BN just after feature standardization and is necessary to avoid decreasing the network representation capacity ). Obtaining Domain Invariant Representations. In the subsequent blocks of E we whiten first and second-order statistics which are domain specific. For this operation we adopt the Domain Whitening Transform (DW T) proposed in . Specifically, for each X i ⊆ B, let l i be its domain label (see Sec. 3.1) and let B li ⊆ B be the subset of feature which have been extracted from all those images in B which share the same domain label. Then, for each v j ∈ B li: Similarly to Eq. 3, Eq. 4 performs whitening using a subset of the current feature batch. Specifically, all the features in B are partitioned depending on the domain label of the image they have been extracted from, so obtaining B 1, B 2,..., etc, where all the features in B l belongs to the images from domain D l. Then, first and second order statistics (µ B l, Σ B l) are computed thus effectively projecting each v j ∈ B li onto a domain-invariant spherical distribution. A similar idea was recently proposed in in a discriminative network for single-source UDA. However, differently from , we also use coloring by re-projecting the whitened features onto a new space governed by a learned multivariate distribution. 
This is done using the learnable parameters β and Γ which do not depend on l i. Our decoder D is functionally and structurally symmetric with respect to E: it takes as input domain and style invariant features computed by E and projects these features onto the desired domain with style extracted from desired image x O i. Similarly to E, D is a sequence of ResBlocks and a few U psampling -N ormalizationReLU -Convolution k×k blocks (more details in Appendix B). Similarly to Sec. 3.2.2, in the N ormalization layers we replace BN with our proposed feature normalization approaches, which are detailed below. Projecting Features onto a Domain-specific Distribution. Apart from the last two blocks of D (see below), all the other blocks are dedicated to project the current set of features onto a domain-specific subspace. This subspace is learned from data using domain-specific coloring parameters (β l, Γ l), where l is the label of the corresponding domain. To this purpose we introduce the conditional Domain Whitening Transform (cDW T), where the term "conditional" specifies that the coloring step is conditioned on the domain label l. In more detail: Similarly to Eq. 4, we first partition B into B 1, B 2,..., etc. However, the membership of v j ∈ B to B l is decided taking into account the desired output domain label l O i for each image rather than its original domain as in case of to Eq. 4. Specifically, let v j ∈ X i and the output domain is given by the label l Once B has been partitioned we define cDW T as follows: Note that, after whitening, and differently from Eq. 4, coloring in Eq. 5 is performed using domainspecific Applying a Specific Style. In order to apply a given style to x i, we extract the style from image x O i using the Style Path (see Fig. 1). Style Path consists of two Convolution k×k -IW T -ReLU -AverageP ooling blocks (which shares the parameters with the first two layers of encoder) and a MultiLayer Perceptron (MLP) F. Following we describe a style using first and second order statistics, which are extracted using the IW T blocks. Then we use F to adapt these statistics to the domain-specific representation obtained as the output of the previous step. In fact, in principle, for each v j ∈ X O i, the W hitening operation inside the IW T transform could be "inverted" using: Indeed, the coloring operation (Eq. 1) is the inverse of whitening (Eq. 2). However, the elements of X i now lie in a feature space different from the output space of Eq. 3, thus the transformation defined by Style Path needs to be adapted. For this reason, we use a MLP (F) which implements this adaptation: Note that, in Eq. 7, has been generated, we use it as the coloring parameters of our Adaptive IWT (AdaIW T): Eq. 8 imposes style-specific first and second order statistics to the features of the last blocks of D in order to mimic the style of x O i. GAN Training. For the sake of clarity, in the rest of the paper we use a simplified notation for G, in which G takes as input only one image instead of a batch. Specifically, ) be the generated image, starting from x i (x i ∈ D li) and with desired output domain l O i and style image x O i. G is trained using the combination of three different losses, with the goal of changing the style and the domain of x i while preserving its content. 
First, we use an adversarial loss based on the Projection Discriminator (D P), which is conditioned on labels (domain labels, in our case) and uses a hinge loss: The second loss is the Identity loss proposed in ), and which in our framework is implemented as follows: In Eq. 11, G computes an identity transformation, being the input and the output domain and style the same. After that, a pixel-to-pixel L 1 norm is computed. Finally, we propose to use a third loss which is based on the rationale that the generation process should be equivariant with respect to a set of simple transformations which preserve the main content of the images (e.g., the foreground object shape). Specifically, we use the set of the affine transformations {h(x; θ)} of image x which are defined by the parameter θ (θ is a 2D transformation matrix). The affine transformation is implemented by differentiable billinear kernel as in . The Equivariance loss is: In Eq. 12, for a given image x i, we randomly choose a parameter θ i and we apply h(·; θ i) tô Then, using the same θ i, we apply h(·; θ i) to x i and we get x i = h(x i ; θ i), which is input to G in order to generate a second image. The two generated images are finally compared using the L 1 norm. This is a form of self-supervision, in which equivariance to geometric transformations is used to extract semantics. Very recently a similar loss has been proposed in , where equivariance to affine transformations is used for image co-segmentation. The complete loss for G is: Classifier Training. Once G is trained, we use it to artificially create a labeled training dataset for the target domain. Specifically, for each S j and each (x i, y i) ∈ S j, we randomly pick one image x t from T which is used as the style-image reference, and we generate: where N + 1 is fixed and indicates the target domain D t label (see Sec. 3.1). and the process is iterated. Finally, we train a classfier C on T L using the cross-entropy loss: In this section we describe the experimental setup and then we evaluate our approach using MSDA datasets. We also present an ablation study in which we analyse the impact of each of TriGAN component on the classification accuracy. In our experiments we consider two common domain adaptation benchmarks, namely the DigitsFive benchmark and the Office-Caltech . The Digits-Five is composed of five digit-recognition datasets: USPS , MNIST , MNIST-M , SVHN and Synthetic numbers datasets (SYNDIGITS). SVHN contains images from Google Street View of real-world house numbers. Synthetic numbers includes 500K computer-generated digits with different sources of variations (i.e. position, orientation, color, blur). USPS is a dataset of digits scanned from U.S. envelopes, MNIST is a popular benchmark for digit recognition and MNIST-M is its colored counterpart. We adopt the experimental protocol described in : in each domain the train/test split is composed of a subset of 25000 images for training and 9000 for testing. For USPS the entire dataset is used. The Office-Caltech is a domain-adaptation benchmark obtained selecting the subset of 10 categories shared between the Office31 and the Caltech256 datasets. It contains 2533 images, about half of which belong to Caltech256. There are four different domains: Amazon (A), DSLR (D), Webcam (W) and Caltech256 (C). We provide architecture details about our generator G and discriminator D P in the Appendix B. 
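As an illustration of the equivariance loss L_Eq introduced above, the following is a minimal sketch. It assumes two placeholder callables, generate(x, domain, style) wrapping the TriGAN generator and warp(x, theta) applying a differentiable affine warp; neither name corresponds to the actual implementation, and using the mean absolute difference is one possible normalization of the L1 norm.

```python
import numpy as np

def equivariance_loss(x, out_domain, style_img, generate, warp, rng):
    """L1 distance between h(G(x), theta) and G(h(x, theta)) for a random affine h.
    `generate` and `warp` are placeholder callables supplied by the caller."""
    theta = np.eye(2, 3) + 0.1 * rng.standard_normal((2, 3))  # random small affine transform
    y1 = warp(generate(x, out_domain, style_img), theta)       # transform after translation
    y2 = generate(warp(x, theta), out_domain, style_img)       # translate after transforming
    return np.abs(y1 - y2).mean()
```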
We train TriGAN for 100 epochs using the Adam optimizer with the learning rate set to 1e-4 for the G and 4e-4 for the D P as in . The loss weighing factor λ in Eqn. 13 is set to 10 as in ). All other hyperparameters are chosen by crossvalidating on the MNIST-M, USPS, SVHN, SYNDIGITS → MNIST adaptation setting and are used in all the other settings. For the Digits-Five experiments we use a mini-batch of size 256 for TriGAN training. Due to the difference in image resolution and image channels, the images of all the domains are converted to 32 × 32 RGB. For a fair comparison, for the final target classifier C we use exactly the same network architecture used in . In the Office-Caltech10 experiments we downsample the images to 164 × 164 to accommodate more samples in a mini-batch. We use a mini-batch of size 24 for training with 1 GPU. For the back-bone target classifier C we use the ResNet101 architecture used by. The weights are initialized with a network pre-trained on the ILSVRC-2012 dataset . In our experiments we remove the output layer and we replace it with a randomly initialized fully-connected layer that has 10 logits, one per each class of the OfficeCaltech10 dataset. C is trained with Adam with an initial learning rate of 1e-5 for the randomly initialized last layer and 1e-6 for all other layers. Since there are only a few training data in the T L dataset, we also use {S j} N j=1 for training C. In this section we analyse our proposal using an ablation study and we compare with MSDA stateof-the-art methods. In Appendix A we show our qualitative . In this section we compare our method with previous MSDA approaches. Tab. 1 and Tab. 2 show the on the Digits-Five and Office-Caltech10 datset, respectively. Table 1 shows that TriGAN achieves an average accuracy of 90.08% which is higher than all other methods. M 3 SDA is better in the mm, up, sv, sy → mt and in the mt, mm, sv, sy → up settings, where TriGAN is the second best. In all the other settings, TriGAN outperforms all other approaches. As an example, in the mt, up, sv, sy → mm setting, TriGAN is better than the second best method M 3 SDA by a significant margin of 10.38%. For the StarGAN baseline, synthetic images are generated in the target domain and a target classifier is trained using our protocol described in Sec. 3.3. StarGAN, despite known to work well for aligned face translation, fails drastically when digits are concerned. This shows the importance of a well-designed generator that enforces domain invariant representations in the MSDA setting when there is a significant domain shift. Models Table 1: Classification accuracy (%) on Digits-Five experiments. MNIST-M, MNIST, USPS, SVHN, Synthetic Digits are abbreviated as mm, mt, up, sv and sy respectively. The best value is in bold and the second best is underlined. Finally, we also experimented using the Office-Caltech10, which is considered to be difficult for reconstruction-based GAN methods because of the high-resolution images. Although the dataset is quite saturated, TriGAN achieves a classification accuracy of 97.0%, outperforming all the other methods and beating the previous state-of-the-art approach (M 3 SDA) by a margin of 0.6% on average (see Tab. 2). Table 2: Classification accuracy (%) on Office-Caltech10 dataset. ResNet-101 pre-trained on ImageNet is used as the backbone network. The best value is in bold and the second best is underlined. 
In this section we analyse the different components of our method and study in isolation their impact on the final accuracy. Specifically, we use the Digits-Five dataset and the following models: i) Model A, which is our full model containing the following components: IWT, DWT, cDWT, AdaIWT and L Eq. ii) Model B, which is similar to Model A except we replace L Eq with the cycle-consistency loss L Cycle of CycleGAN ). iii) Model C, where we replace IWT, DWT, cDWT and AdaIWT of Model A with IN , BN , conditional Batch Normalization (cBN) and Adaptive Instance Normalization (AdaIN) . This comparison highlights the difference between feature whitening and feature standardisation. iv) Model D, which ignores the style factor. Specifically, in Model D, the blocks related to the style factor, i.e., the IWT and the AdaIWT blocks, are replaced by DWT and cDWT blocks, respectively. v) Model E, in which the style path differs from Model A in the way the style is applied to the domain-specific representation. Specifically, we remove the MLP F and we directly apply (µ). vi) Finally, Model F represents no-domain assumption (e.g. the DWT and cDWT blocks are replaced with standard WC blocks). Tab. 3 shows that Model A outperforms all the ablated models. Model B shows that L Cycle is detrimental for the accuracy because G may focus on meaningless information to reconstruct back the image. Conversely, the affine transformations used in case of L Eq, force G to focus on the shape of the content of the images. Model C is outperformed by model A, demonstrating the importance of feature whitening over feature standardisation, corroborating the findings of in a pure-discriminative setting. Moreover, the no-style assumption in Model D hurts the classification accuracy by a margin of 1.76% when compared with Model A. We believe this is due to the fact that, when only domain-specific latent factors are modeled but instance-specific style information is missing in the image translation process, then the diversity of translations decreases, consequently reducing the final accuracy. Model E shows the need of using the proposed style path. Finally, Model F shows that having a separate factor for domain yields better performance Our proposed method can be used for multi-domain image-to-image translation tasks. We conduct experiments on Alps Seasons dataset which consists of images of Alps mountain range belonging to four different domains. Fig. 2 shows some images generated using our generator on the Alps Seasons. For this experiment we compare our generator with StarGAN and report the FID metrics for the generated images. FID measures the realism of generated images and it is desirable to have a lower FID score. The FID is computed considering all the real samples in the target domain and generating equivalent number of synthetic images in the target domain. It can be observed from Tab. 4 that the FID scores of our approach is significantly lower than that of StarGAN. This further highlights the fact that explicit enforcing of domain and style invariant representation is essential for multi-domain translation. 5 In this work we proposed TriGAN, an MSDA framework which is based on data-generation from multiple source domains using a single generator. The underlying principle of our approach to to obtain domain-style invariant representations in order to simplify the generation process. 
Specifically, our generator progressively removes style and domain specific statistics from the source images and then re-projects the so obtained invariant representation onto the desired target domain and styles. We obtained state-of-the-art on two MSDA datasets, showing the potentiality of our approach. We performed a detailed ablation study which shows the importance of each component of the proposed method. Some sample translations of our G are shown in Fig. 3. For example, in Fig. 3 (a) when the SVHN digit "six" with side-digits is translated to MNIST-M the cDWT blocks re-projects it to MNIST-M domain (i.e., single digit without side-digits) and the AdaIWT block applies the instance-specific style of the digit "three" (i.e., blue digit with red ) to yield a blue "six" with red . Similar trends are also observed in Fig. 3 (b). Adaptive Instance Whitening (AdaIWT) blocks. The AdaIWT blocks are analogous to the IWT blocks except from the IWT which is replaced by the AdaIWT. The AdaIWT block is a sequence: U psampling m×m − Convolution k×k − AdaIW T − ReLU, where m = 2 and k = 3. AdaIWT also takes as input the coloring parameters (Γ, β) (See Sec. 3.2.3) and Fig. 4 (b) ). Two AdaIWT blocks are consecutively used in D. The last AdaIWT block is followed by a Convolution 5×5 layer. Style Path. The Style Path is composed of: (Fig. 4 (c) ). The output of the Style Path is (β 1 Γ 1) and (β 2 Γ 2), which are input to the second and the first AdaIWT blocks, respectively (see Fig. 4 (b) ). The M LP is composed of five fully-connected layers with 256, 128, 128, 256 neurons, with the last fully-connected layer having a number of neurons equal to the cardinality of the coloring parameters (β Γ). Domain Whitening Transform (DWT) blocks. The schematic representation of a DWT block is shown in Fig. 5 (a). For the DWT blocks we adopt a residual-like structure: We also add identity shortcuts in the DWT residual blocks to aid the training process. Conditional Domain Whitening Transform (cDWT) blocks. The proposed cDWT blocks are schematically shown in Fig. 5 (b). Similarly to a DWT block, a cDWT block contains the following layers: cDW T − ReLU − Convolution 3×3 − cDW T − ReLU − Convolution 3×3. Identity shortcuts are also used in the cDWT residual blocks. All the above blocks are assembled to construct G, as shown in Fig. 6. Specifically, G contains two IWT blocks, one DWT block, one cDWT block and two AdaIWT blocks. It also contains the Style Path and 2 Convolution 5×5 (one before the first IWT block and another after the last AdaIWT block), which is omitted in Fig. 6 for the sake of clarity. {Γ 1, β 1, Γ 2, β 2} are computed using the Style Path. For the discriminator D P architecture we use a Projection Discriminator . In D P we use projection shortcuts instead of identity shortcuts. In Fig 7 we schematically show a discriminator block. D P is composed of 2 such blocks. We use spectral normalization in D P. Since, our proposed TriGAN has a generic framework and can handle N -way domain translations, we also conduct experiments for Single-Source UDA scenario where N = 2 and the source domain is grayscale MNIST. We consider the following UDA settings with the digits dataset: C.1 DATASETS MNIST → USPS. The MNIST dataset contains grayscale images of handwritten digits 0 to 9. The pixel resolution of MNIST digits is 28 × 28. The USPS contains similar grayscale handwritten digits except the resolution is 16 × 16. 
We up-sample images from both domains to 32 × 32 during 91.2 62.0 -ADDA 89.4 --PixelDA 95.9 98.2 -UNIT 95.9 --SBADA-GAN 97.6 99.4 61.1 GenToAdapt 92.5 -36.4 CyCADA 94.8 --I2I Adapt 92.1 --TriGAN (Ours) 98.0 95.7 66.3 Table 5: Classification Accuracy (%) of GAN-based methods on the Single-source UDA setting for Digits Recognition. The best number is in bold and the second best is underlined. In this section we compare our proposed TriGAN with GAN-based state-of-the-art methods, both with adversarial learning based approaches and reconstruction-based approaches. Tab. 5 reports the performance of our TriGAN alongside the obtained from the following baselines: Domain Adversarial Neural Network (DANN), Coupled generative adversarial networks (CoGAN), Adversarial discriminative domain adaptation ) (ADDA), Pixel-level domain adaptation (PixelDA), Unsupervised image-to-image translation networks ) (UNIT), Symmetric bi-directional adaptive gan (SBADA-GAN), Generate to adapt (GenToAdapt), Cycle-consistent adversarial domain adaptation ) (CyCADA) and Image to image translation for domain adaptation (I2I Adapt). As can be seen from Tab. 5 TriGAN does better in two out of three adaptation settings. It is only worse in the MNIST → MNIST-M setting where it is the third best. It is to be noted that TriGAN does significantly well in MNIST → SVHN adaptation which is particularly considered as a hard setting. TriGAN is 5.2% better than the second best method SBADA-GAN for MNIST → SVHN. | In this paper we propose generative method for multisource domain adaptation based on decomposition of content, style and domain factors. | 608 | scitldr |
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones – which we refer to as co-generation – is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly for classical ways of modeling distributions like structured prediction. In contrast, almost nothing is known about this task when considering recently proposed techniques for modeling high-dimensional distributions, particularly generative adversarial nets (GANs). Therefore, in this paper, we study the occurring challenges for co-generation with GANs. To address those challenges we develop an annealed importance sampling (AIS) based Hamiltonian Monte Carlo (HMC) co-generation algorithm. The presented approach significantly outperforms classical gradient-based methods on synthetic data and on CelebA. While generative adversarial nets (GANs) and variational auto-encoders (VAEs) model a joint probability distribution which implicitly captures the correlations between multiple parts of the output, e.g., pixels in an image, and while those methods permit easy sampling from the entire output space domain, it remains an open question how to sample from part of the domain given the remainder? We refer to this task as co-generation. To enable co-generation for a domain unknown at training time, for GANs, optimization based algorithms have been proposed. Intuitively, they aim at finding that latent sample which accurately matches the observed part. However, successful training of the GAN leads to an increasingly ragged energy landscape, making the search for an appropriate latent variable via backpropagation through the generator harder and harder until it eventually fails. To deal with this ragged energy landscape during co-generation, we develop a method using an annealed importance sampling (AIS) based Hamiltonian Monte Carlo (HMC) algorithm, which is typically used to estimate (ratios of) the partition function. Rather than focus on the partition function, the proposed approach leverages the benefits of AIS, i.e., gradually annealing a complex probability distribution, and HMC, i.e., avoiding a localized random walk. We evaluate the proposed approach on synthetic data and imaging data (CelebA), showing compelling via MSE and MSSIM metrics. For more details and please see our main conference paper. In the following we first motivate the problem of co-generation before we present an overview of our proposed approach and discuss the details of the employed Hamiltonian Monte Carlo method. Assume we are given a well trained generatorx = G θ (z), parameterized by θ, which is able to produce samplesx from an implicitly modeled distribution p G (x|z) via a transformation of embeddings z. Further assume we are given partially observed data x o while the remaining part x h of the data x = (x o, x h) is latent. To reconstruct the latent parts of the data x h from available observations x o, a program can be formulated as follows: where G θ (z) o denotes the restriction of the generated sample G θ (z) to the observed part. Upon solving the program given in Eq., we obtain an estimate for the missing datax However, in practice, Eq. turns out to be extremely hard to address, particularly if the generator G θ (z) is very well trained. To see this, consider as an example a generator operating on a 2-dimensional latent space z = (z 1, z 2) and 2-dimensional data x = (x 1, x 2) (blue points in Fig. 
2(a) ). We use h = 1 and let x o = x 2 = 0. In the first row of Fig. 1 we illustrate the loss surface of the objective given in Eq. obtained when using a generator G θ (z) trained on the original 2-dimensional data for 500, 1.5k, 2.5k and 15k iterations (columns in Fig. 1). We observe the latent space to become increasingly ragged, exhibiting folds that clearly separate different data regimes. First (e.g., gradient descent (GD)) or second order optimization techniques cannot cope easily with such a loss landscape and likely get trapped in local optima. We observe GD (red trajectory in Fig. 1 first row and loss in second row) to get stuck in a local optimum as the loss fails to decrease to zero once the generator better captures the data. To prevent those local-optima issues for co-generation, we propose an annealed importance-sampling (AIS) based Hamiltonian Monte Carlo (HMC) method in the following (Alg. 1). In order to reconstruct the hidden portion x h of the data. To obtain samplesẑ following the posterior distribution p(z|x o), we use annealed importance sampling (AIS) to gradually approach the complex and often high-dimensional posterior distribution p(z|x o) by simulating a Markov Chain starting from the prior distribution p(z) = N (z|0, I), a standard normal distribution (zero mean and unit variance). Formally, we define an annealing schedule for the parameter β t from β 0 = 0 to β T = 1. At every time step t ∈ {1, . . ., T} we refine the samples drawn at the previous timestep t − 1 so as to represent the distributionp t (z|x o) = p(z|x o) βt p(z) 1−βt. We use a sigmoid schedule for the parameter β t. To successively refine the samples we use Hamilton Monte Carlo (HMC) sampling because a proposed update can be far from the current sample while still having a high acceptance probability. We use 0.01 as the leapfrog step size and employ 10 leapfrog updates per HMC loop for the synthetic 2D dataset and 20 leapfrog updates for real dataset at first. The acceptance rate is 0.65, as recommended by Neal. Hamilton Monte Carlo (HMC) is capable of traversing folds in an energy landscape. For this, HMC methods trade potential energy U t (z) = − logp t (z|x o) with kinetic energy K t (v). ∀z ∈ Z compute new proposal sample using leapfrog integration on Hamiltonian 8: ∀z ∈ Z use Metropolis Hastings to check whether to accept the proposal and update Z 9: end for 10: end for 11: Return: Z HMC defines a Hamiltonian H(z, v) = U (z) + K(v) or conversely a joint probability distribution log p(z, v) ∝ −H(z, v) and proceeds by iterating three steps M times. In a first step, the Hamiltonian is initialized by randomly sampling the momentum variable v, typically using a standard Gaussian. In the second step, (z *, v *) are proposed via leapfrog integration to move along a hypersurface of the Hamiltonian. In the final third step we decide whether to accept the proposal (z *, v *) computed via leapfrog integration. Formally, we accept the proposal with probability min{1, exp (−H(z If the proposed state (z *, v *) is rejected, the m + 1-th iteration reuses z, otherwise z is replaced with z * in the m + 1-th iteration. This process is shown in Alg. 1 line 6, 7 and 8. Baselines: In the following, we evaluate the proposed approach on synthetic and imaging data. We use two GD baselines, employing different initialization methods. The first one samples a single z randomly. The second picks that one sample z from 5000 points which best matches the objective given in Eq. initially. 
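Before turning to the experiments, the following is a minimal sketch of the HMC transition used inside each AIS step of Alg. 1. The annealed potential U_t(z) = −log p̃_t(z|x_o) and its gradient are passed in as callables; building them from a Gaussian observation model on the observed pixels (i.e., a ‖G_θ(z)_o − x_o‖² term weighted by β_t plus the prior term) is an assumption for illustration, not necessarily the exact choice made in the paper.

```python
import numpy as np

def hmc_step(z, U, grad_U, rng, step_size=0.01, n_leapfrog=20):
    """One HMC transition: resample momentum, leapfrog-integrate the Hamiltonian
    H(z, v) = U(z) + 0.5 * ||v||^2, then Metropolis-Hastings accept/reject."""
    v = rng.standard_normal(z.shape)             # resample momentum ~ N(0, I)
    z_new, v_new = z.copy(), v.copy()
    v_new -= 0.5 * step_size * grad_U(z_new)     # half step for momentum
    for _ in range(n_leapfrog - 1):
        z_new += step_size * v_new               # full step for position
        v_new -= step_size * grad_U(z_new)       # full step for momentum
    z_new += step_size * v_new
    v_new -= 0.5 * step_size * grad_U(z_new)     # final half step for momentum
    h_old = U(z) + 0.5 * np.sum(v ** 2)
    h_new = U(z_new) + 0.5 * np.sum(v_new ** 2)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):   # accept with prob min(1, e^{H_old - H_new})
        return z_new
    return z
```

In the full procedure this transition is applied at every β_t of the sigmoid schedule, with β_0 = 0 (prior) annealed to β_T = 1 (posterior), so that the samples gradually move from the prior towards the posterior over the latent space.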
To illustrate the advantage of our proposed method over the common baseline, we first demonstrate our on 2-dimensional synthetic data. Specifically, the 2-dimensional data x = (x 1, x 2) is drawn from a mixture of five equally weighted Gaussians each with a variance of 0.02, the means of In this experiment, we aim to reconstruct x = (x 1, x 2), given x o = x 2 = 0. The optimal solution for the reconstruction isx =, where the reconstruction error should be 0. However, as discussed in reference to Fig. 1 earlier, we observe that energy barriers in the Z-space complicate optimization. In contrast, our proposed AIS co-generation method only requires one initialization to achieve the desired after 6, 000 AIS loops, as shown in Fig. 2 (15000 (d) ). Specifically, reconstruction with generators trained for a different number of epochs (500, 1.5k and 15k) are shown in the rows. The samples obtained from the generator for the data (blue points in column (a)) are illustrated in column (a) using black color. Using the respective generator to solve the program given in Eq. via GD yields highlighted with yellow color in column (b). The empirical reconstruction error frequency for this baseline is given in column (c). The and the reconstruction error frequency obtained with Alg. 1 are shown in columns (d, e). We observe significantly better and robustness to initialization. In Fig. 3 we show for 100 samples that Alg. 1 moves them across the energy barriers during the annealing procedure, illustrating the benefits of AIS based HMC over GD. To validate our method on real data, we evaluate on CelebA, using MSE and MSSIM metrics. We use the progressive GAN architecture. The size of the input is 512 and the size of the output is 128 × 128. We randomly mask blocks of width and height ranging from 30 to 60. Then we use Alg. 1 for reconstruction with 500 HMC loops. In Fig. 3 (a,b), we observe that Alg. 1 outperforms over both baselines for all GAN training iterations on both MSSIM and MSE metrics. In Fig. 4 we show generated by both baselines and Alg. 1. We propose a co-generation approach, i.e., we complete partially given input data, using annealed importance sampling (AIS) based on the Hamiltonian Monte Carlo (HMC). Different from classical optimization based methods, specifically GD, which get easily trapped in local optima when solving this task, the proposed approach is much more robust. Importantly, the method is able to traverse large energy barriers that occur when training generative adversarial nets. Its robustness is due to AIS gradually annealing a probability distribution and HMC avoiding localized walks. We show additional for real data experiments. We observe our proposed algorithm to recover masked images more accurately than baselines and to generate better high-resolution images given low-resolution images. We show masked CelebA (Fig. 5) and LSUN (Fig. 6) recovery for baselines and our method, given a Progressive GAN generator. Note that our algorithm is pretty robust to the position of the z initialization, since the generated are consistent in Fig. 5. (a) (b) (c) | Using annealed importance sampling on the co-generation problem. | 609 | scitldr |
Implicit probabilistic models are models defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results. Generative modelling is a cornerstone of machine learning and has received increasing attention. Recent models like variational autoencoders (VAEs) BID32 BID45 and generative adversarial nets (GANs) BID21 BID25 have delivered impressive advances in performance and generated a lot of excitement. Generative models can be classified into two categories: prescribed models and implicit models BID12 BID40. Prescribed models are defined by an explicit specification of the density, and so their unnormalized complete likelihood can usually be expressed in closed form. Examples include models whose complete likelihoods lie in the exponential family, such as mixtures of Gaussians BID18, hidden Markov models BID5 and Boltzmann machines BID27. Because computing the normalization constant, also known as the partition function, is generally intractable, sampling from these models is challenging. On the other hand, implicit models are defined most naturally in terms of a (simple) sampling procedure. Most models take the form of a deterministic parameterized transformation T θ (·) of an analytic distribution, like an isotropic Gaussian. This can be naturally viewed as the distribution induced by the following sampling procedure:
1. Sample z ∼ N (0, I)
2. Return x := T θ (z)
The transformation T θ (·) often takes the form of a highly expressive function approximator, like a neural net. Examples include generative adversarial nets (GANs) BID21 BID25 and generative moment matching nets (GMMNs) BID36 BID16. The marginal likelihood of such models can be characterized as follows: DISPLAYFORM0 where φ(·) denotes the probability density function (PDF) of N (0, I). In general, attempting to reduce this to a closed-form expression is hopeless. Evaluating it numerically is also challenging, since the domain of integration could consist of an exponential number of disjoint regions and numerical differentiation is ill-conditioned. These two categories of generative models are not mutually exclusive. Some models admit both an explicit specification of the density and a simple sampling procedure and so can be considered as both prescribed and implicit. Examples include variational autoencoders BID32 BID45, their predecessors BID38 BID10 and extensions BID11, and directed/autoregressive models, e.g., BID42 BID6 BID33 and van den Oord et al. Maximum likelihood BID19 BID17 is perhaps the standard method for estimating the parameters of a probabilistic model from observations. The maximum likelihood estimator (MLE) has a number of appealing properties: under mild regularity conditions, it is asymptotically consistent, efficient and normal. A long-standing challenge of training probabilistic models is the computational roadblock of maximizing the log-likelihood function directly. For prescribed models, maximizing likelihood directly requires computing the partition function, which is intractable for all but the simplest models.
Many powerful techniques have been developed to attack this problem, including variational methods BID31, contrastive divergence BID26 ), score matching BID30 and pseudolikelihood maximization BID9, among others. For implicit models, the situation is even worse, as there is no term in the log-likelihood function that is in closed form; evaluating any term requires computing an intractable integral. As a , maximizing likelihood in this setting seems hopelessly difficult. A variety of likelihood-free solutions have been proposed that in effect minimize a divergence measure between the data distribution and the model distribution. They come in two forms: those that minimize an f -divergence, and those that minimize an integral probability metric BID41. In the former category are GANs, which are based on the idea of minimizing the distinguishability between data and samples (; BID24 . It has been shown that when given access to an infinitely powerful discriminator, the original GAN objective minimizes the Jensen-Shannon divergence, the − log D variant of the objective minimizes the reverse KL-divergence minus a bounded quantity, and later extensions BID43 minimize arbitrary f -divergences. In the latter category are GMMNs which use maximum mean discrepancy (MMD) BID22 as the witness function. In the case of GANs, despite the theoretical , there are a number of challenges that arise in practice, such as mode dropping/collapse BID21, vanishing gradients ) and training instability BID21. A number of explanations have been proposed to explain these phenomena and point out that many theoretical rely on three assumptions: the discriminator must have infinite modelling capacity BID21, the number of samples from the true data distribution must be infinite ) and the gradient ascent-descent procedure BID4 ) can converge to a global pure-strategy Nash equilibrium BID21. When some of these assumptions do not hold, the theoretical guarantees do not necessarily apply. A number of ways have been proposed that alleviate some of these issues, e.g., (; ; BID13 BID15 BID28), but a way of solving all three issues simultaneously remains elusive. In this paper, we present an alternative method for estimating parameters in implicit models. Like the methods above, our method is likelihood-free, but can be shown to be equivalent to maximizing likelihood under some conditions. Our holds when the capacity of the model is finite and the number of data examples is finite. The idea behind the method is simple: it finds the nearest sample to each data example and optimizes the model parameters to pull the sample towards it. The direction in which nearest neighbour search is performed is important: the proposed method ensures each data example has a similar sample, which contrasts with an alternative approach of pushing each sample to the nearest data example, which would ensure that each sample has a similar data example. The latter approach would permit all samples being similar to one data example. Such a scenario would be heavily penalized by the former approach. The proposed method could sidestep the three issues mentioned above: mode collapse, vanishing gradients and training instability. 
Modes are not dropped because the loss ensures each data example has a sample nearby at optimality; gradients do not vanish because the gradient of the distance between a data example and its nearest sample does not become zero unless they coincide; training is stable because the estimator is the solution to a simple minimization problem. By leveraging recent advances in fast nearest neighbour search algorithms BID34, this approach is able to scale to large, high-dimensional datasets. Note that the use of Euclidean distance is not a major limitation of the proposed approach. A variety of distance metrics are either exactly or approximately equivalent to Euclidean distance in some non-linear embedding space, in which case the theoretical guarantees are inherited from the Euclidean case. This encompasses popular distance metrics used in the literature, like the Euclidean distance between the activations of a neural net, which is often referred to as a perceptual similarity metric (BID14). The approach can be easily extended to use these metrics, though because this is the initial paper on this method, we focus on the vanilla setting of Euclidean distance in the natural representation of the data, e.g., pixels, both for simplicity/clarity and for comparability to vanilla versions of other methods that do not use auxiliary sources of labelled data or leverage domain-specific prior knowledge. For distance metrics that cannot be embedded in Euclidean space, the analysis can be easily adapted with minor modifications as long as the volume of a ball under the metric has a simple dependence on its radius. There has been debate BID29 over whether maximizing likelihood of the data is the appropriate objective for the purposes of learning generative models. Recall that maximizing likelihood is equivalent to minimizing D KL (p data p θ), where p data denotes the empirical data distribution and p θ denotes the model distribution. One proposed alternative is to minimize the reverse KL-divergence, D KL (p θ p data), which is suggested BID29 to be better because it severely penalizes the model for generating an implausible sample, whereas the standard KL-divergence, D KL (p data p θ), severely penalizes the model for assigning low density to a data example. As a result, when the model is underspecified, i.e. has less capacity than what's necessary to fit all the modes of the data distribution, minimizing D KL (p θ p data) leads to a narrow model distribution that concentrates around a few modes, whereas minimizing D KL (p data p θ) leads to a broad model distribution that hedges between modes. The success of GANs in generating good samples is often attributed to the former phenomenon. This argument, however, relies on the assumption that we have access to an infinite number of samples from the true data distribution. In practice, however, this assumption rarely holds: if we had access to the true data distribution, then there is usually no need to fit a generative model, since we can simply draw samples from the true data distribution. What happens when we only have the empirical data distribution? Recall that D KL (p q) is defined and finite only if p is absolutely continuous w.r.t. q, i.e.: q(x) = 0 implies p(x) = 0 for all x. In other words, D KL (p q) is defined and finite only if the support of p is contained in the support of q.
Now, consider the difference between D KL (p data p θ) and D KL (p θ p data): minimizing the former, which is equivalent to maximizing likelihood, ensures that the support of the model distribution contains all data examples, whereas minimizing the latter ensures that the support of the model distribution is contained in the support of the empirical data distribution, which is just the set of data examples. In other words, maximum likelihood disallows mode dropping, whereas minimizing reverse KL-divergence forces the model to assign zero density to unseen data examples and effectively prohibits generalization. Furthermore, maximum likelihood discourages the model from assigning low density to any data example, since doing so would make the likelihood, which is the product of the densities at each of the data examples, small. From the modelling perspective, because maximum likelihood is guaranteed to preserve all modes, it can make use of all available training data and can therefore be used to train high-capacity models that have a large number of parameters. In contrast, using an objective that permits mode dropping allows the model to pick and choose which data examples it wants to model. As a , if the goal is to train a high-capacity model that can learn the underlying data distribution, we would not be able to do so using such an objective because we have no control over which modes the model chooses to drop. Put another way, we can think about the model's performance along two axes: its ability to generate plausible samples (precision) and its ability to generate all modes of the data distribution (recall). A model that successfully learns the underlying distribution should score high along both axes. If mode dropping is allowed, then an improvement in precision may be achieved at the expense of lower recall and could represent a move to a different point on the same precision-recall curve. As a , since sample quality is an indicator of precision, improvement in sample quality in this setting may not mean an improvement in density estimation performance. On the other hand, if mode dropping is disallowed, since full recall is always guaranteed, an improvement in precision is achieved without sacrificing recall and so implies an upwards shift in the precision-recall curve. In this case, an improvement in sample quality does signify an improvement in density estimation performance, which may explain sample quality historically was an important way to evaluate the performance of generative models, most of which maximized likelihood. With the advent of generative models that permit mode dropping, however, sample quality is no longer a reliable indicator of density estimation performance, since good sample quality can be trivially achieved by dropping all but a few modes. In this setting, sample quality can be misleading, since a model with low recall on a lower precision-recall curve can achieve a better precision than a model with high recall on a higher precision-recall curve. Since it is hard to distinguish whether an improvement in sample quality is due to a move along the same precision-recall curve or a real shift in the curve, an objective that disallows mode dropping is critical tool that researchers can use to develop better models, since they can be sure that an apparent improvement in sample quality is due to a shift in the precision-recall curve.. 
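A toy discrete example can make the support argument above concrete. The distributions below are invented purely for illustration: a two-mode empirical data distribution, a broad model that also places mass on unseen points, and a narrow model that drops one mode.

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q); infinite unless the support of p is contained in the support of q."""
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p_data   = np.array([0.5, 0.5, 0.0, 0.0])   # empirical distribution over two observed points
p_broad  = np.array([0.3, 0.3, 0.2, 0.2])   # hedges between modes and generalizes to unseen points
p_narrow = np.array([1.0, 0.0, 0.0, 0.0])   # drops the second mode entirely

print(kl(p_data, p_broad), kl(p_broad, p_data))    # finite, inf: reverse KL penalizes any mass off the data
print(kl(p_data, p_narrow), kl(p_narrow, p_data))  # inf, finite: forward KL (maximum likelihood) forbids mode dropping
```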
Let F θ (·) be the cumulative distribution function (CDF) ofr θ and Ψ(z):= min θ E R θ |p θ = z.If P θ satisfies the following:• p θ (x) is differentiable w.r.t. θ and continuous w.r.t. x everywhere.• ∀θ, v, there exists θ such that p θ (x) = p θ (x + v) ∀x.• For any θ 1, θ 2, there exists θ 0 such that DISPLAYFORM0 MNIST TFD DBN BID7 138 ± 2 1909 ± 66 SCAE BID7 121 ± 1.6 2110 ± 50 DGSN 214 ± 1.1 1890 ± 29 GAN BID21 225 ± 2 2057 ± 26 GMMN BID36 147 ± 2 2085 ± 25 IMLE (Proposed) 257 ± 6 2139 ± 27 Table 1: Log-likelihood of the test data under the Gaussian Parzen window density estimated from samples generated by different methods.• DISPLAYFORM0, where B θ * (τ) denotes the ball centred at θ * of radius τ.• Ψ(z) is differentiable everywhere.• DISPLAYFORM1... DISPLAYFORM2 Now, we examine the restrictiveness of each condition. The first condition is satisfied by nearly all analytic distributions. The second condition is satisfied by nearly all distributions that have an unrestricted location parameter, since one can simply shift the location parameter by v. The third condition is satisfied by most distributions that have location and scale parameters, like a Gaussian distribution, since the scale can be made arbitrarily low and the location can be shifted so that the constraint on p θ (·) is satisfied. The fourth condition is satisfied by nearly all distributions, whose density eventually tends to zero as the distance from the optimal parameter setting tends to infinity. The fifth condition requires min θ E R θ |p θ = z to change smoothly as z changes. The final condition requires the two n-dimensional vectors, one of which can be chosen from a set of d vectors, to be not exactly orthogonal. As a , this condition is usually satisfied when d is large, i.e. when the model is richly parameterized. There is one remaining difficulty in applying this theorem, which is that the quantity 1/Ψ (p θ * (x i))p θ * (x i), which appears as an coefficient on each term in the proposed objective, is typically not known. If we consider a new objective that ignores the coefficients, i.e. n i=1 E R θ i, then minimizing this objective is equivalent to minimizing an upper bound on the ideal objective, DISPLAYFORM3 The tightness of this bound depends on the difference between the highest and lowest likelihood assigned to individual data points at the optimum, i.e. the maximum likelihood estimate of the parameters. Such a model should not assign high likelihoods to some points and low likelihoods to others as long as it has reasonable capacity, since doing so would make the overall likelihood, which is the product of the likelihoods of individual data points, low. Therefore, the upper bound is usually reasonably tight. We trained generative models using the proposed method on three standard benchmark datasets, MNIST, the Toronto Faces Dataset (TFD) and CIFAR-10. All models take the form of feedforward neural nets with isotropic Gaussian noise as input. For MNIST, the architecture consists of two fully connected hidden layers with 1200 units each followed by a fully connected output layer with 784 units. ReLU activations were used for hidden layers and sigmoids were used for the output layer. For TFD, the architecture is wider and consists of two fully connected hidden layers with 8000 units each followed by a fully connected output layer with 2304 units. For both MNIST and TFD, the dimensionality of the noise vector is 100. 
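The MNIST/TFD generator just described is small enough to write down directly; below is a sketch in PyTorch of the MNIST variant (100-dimensional isotropic Gaussian noise, two 1200-unit ReLU hidden layers, 784 sigmoid outputs). The TFD version would simply swap in 8000-unit hidden layers and 2304 outputs.

```python
import torch
import torch.nn as nn

mnist_generator = nn.Sequential(
    nn.Linear(100, 1200), nn.ReLU(),
    nn.Linear(1200, 1200), nn.ReLU(),
    nn.Linear(1200, 784), nn.Sigmoid(),   # pixel intensities in [0, 1]
)

samples = mnist_generator(torch.randn(64, 100))   # 64 samples, each a flattened 28x28 image
```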
For CIFAR-10, we used a simple convolutional architecture with 1000-dimensional Gaussian noise as input. The architecture consists of five convolutional layers with 512 output channels and a kernel size of 5 that all produce 4 × 4 feature maps, followed by a bilinear upsampling layer that doubles the width and height of the feature maps. There is a batch normalization layer followed by leaky ReLU activations with slope −0.2 after each convolutional layer. This design is then repeated for each subsequent level of resolution, namely 8 × 8, 16 × 16 and 32 × 32, so that we have 20 convolutional layers, each with output 512 channels. We then add a final output layer with three output channels on top, followed by sigmoid activations. We note that this architecture has more capacity than typical architectures used in other methods, like BID44. This is because our method aims to capture all modes of the data distribution and therefore needs more modelling capacity than methods that are permitted to drop modes. sirable properties of the generative model that do not affect performance on the task. Intrinsic evaluation metrics measure performance without relying on external models or data. Popular examples include estimated log-likelihood ) and visual assessment of sample quality. While recent literature has focused more on the latter and less on the former, it should be noted that they evaluate different properties -sample quality reflects precision, i.e.: how accurate the model samples are compared to the ground truth, whereas estimated log-likelihood focuses on recall, i.e.: how much of the diversity in the data distribution the model captures. Consequently, both are important metrics; one is not a replacement for the other. As pointed out by , "qualitative as well as quantitative analyses based on model samples can be misleading about a model's density estimation performance, as well as the probabilistic model's performance in applications other than image synthesis." Two models that achieve different levels of precision may simply be at different points on the same precision-recall curve, and therefore may not be directly comparable. Models that achieve the same level of recall, on the other hand, may be directly compared. So, for methods that maximize likelihood, which are guaranteed to preserve all modes and achieve full recall, both sample quality and estimated log-likelihood capture precision. Because most generative models traditionally maximized likelihood or a lower bound on the likelihood, the only property that differed across models was precision, which may explain why sample quality has historically been seen as an important indicator of performance. However, in heterogenous experimental settings with different models optimized for various objectives, sample quality does not necessarily reflect how well a model learns the underlying data distribution. Therefore, under these settings, both precision and recall need to be measured. While there is not yet a reliable way to measure recall (given the known issues of estimated log-likelihoods in high dimensions), this does not mean that sample quality can be a valid substitute for estimated log-likelihoods, as it cannot detect the lack of diversity of samples. A secondary issue that is more easily solvable is that samples presented in papers are sometimes cherry-picked; as a , they capture the maximum sample quality, but not necessarily the mean sample quality. 
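Before turning to how these evaluation pitfalls are mitigated, it is worth spelling out the estimated log-likelihood protocol behind Table 1. The sketch below is the standard Gaussian Parzen window estimator fit to model samples; choosing the bandwidth on a held-out split is an assumption about the protocol rather than a detail stated here.

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(test_data, samples, sigma):
    """Mean log-density of test_data under an isotropic Gaussian Parzen window
    centred on model samples.  test_data: (n_test, d), samples: (n, d)."""
    n, d = samples.shape
    sq_dists = ((test_data[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    log_p = (logsumexp(-sq_dists / (2.0 * sigma ** 2), axis=1)
             - np.log(n) - 0.5 * d * np.log(2.0 * np.pi * sigma ** 2))
    return float(log_p.mean())

# sigma is typically chosen to maximize this quantity on a validation split.
```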
To mitigate these problems to some extent, we avoid cherry-picking and visualize randomly chosen samples, which are shown in FIG0. We also report the estimated log-likelihood in Table 1. As mentioned above, both evaluation criteria have biases/deficiencies, so performing well on either of these metrics does not necessarily indicate good density estimation performance. However, not performing badly on either metric can provide some comfort that the model is simultaneously able to achieve reasonable precision and recall. As shown in FIG0, despite its simplicity, the proposed method is able to generate reasonably good samples for MNIST, TFD and CIFAR-10. While it is commonly believed that minimizing reverse KL-divergence is necessary to produce good samples and maximizing likelihood necessarily leads to poor samples BID23, the results suggest that this is not necessarily the case. Even though Euclidean distance was used in the objective, the samples do not appear to be desaturated or overly blurry. Samples also seem fairly diverse. This is supported by the estimated log-likelihood in Table 1. Because the model achieved a high score on that metric on both MNIST and TFD, this suggests that the model did not suffer from significant mode dropping. In FIG3 in the supplementary material, we show samples and their nearest neighbours in the training set. Each sample is quite different from its nearest neighbour in the training set, suggesting that the model has not overfitted to examples in the training set. Next, we visualize the learned manifold by walking along a geodesic on the manifold between pairs of samples. More concretely, we generate five samples, arrange them in arbitrary order, perform linear interpolation in latent variable space between adjacent pairs of samples, and generate an image from the interpolated latent variable. As shown in FIG2, the images along the path of interpolation appear visually plausible and do not have noisy artifacts. In addition, the transition from one image to the next appears smooth, including for CIFAR-10, which contrasts with findings in the literature that suggest the transition between two natural images tends to be abrupt. This indicates that the support of the model distribution has not collapsed to a set of isolated points and that the proposed method is able to learn the geometry of the data manifold, even though it does not learn a distance metric explicitly. Finally, we illustrate the evolution of samples as training progresses in FIG1. As shown, the samples are initially blurry and become sharper over time. Importantly, sample quality consistently improves over time, which demonstrates the stability of training. While our sample quality may not be state-of-the-art, it is important to remember that these results are obtained under the setting of full recall. So, this does not necessarily mean that our method models the underlying data distribution less accurately than other methods that achieve better sample quality, as some of them may drop modes and therefore achieve less than full recall. As previously mentioned, this does not suggest a fundamental tradeoff between precision and recall that cannot be overcome -on the contrary, our method provides researchers with a way of designing models that can improve the precision-recall curve without needing to worry that the observed improvements are due to a movement along the curve.
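The latent-space interpolation used for the manifold visualisation above is straightforward to reproduce; the sketch below assumes a trained generator g and uses illustrative defaults for the number of anchors and interpolation steps.

```python
import torch

@torch.no_grad()
def latent_interpolation(g, z_dim=100, n_anchors=5, steps=8):
    """Decode linear interpolations between adjacent pairs of randomly drawn latent codes."""
    anchors = torch.randn(n_anchors, z_dim)
    rows = []
    for z0, z1 in zip(anchors[:-1], anchors[1:]):
        alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)      # (steps, 1)
        rows.append(g((1.0 - alphas) * z0 + alphas * z1))          # decode each interpolated code
    return torch.cat(rows, dim=0)
```

Smooth, visually plausible transitions along these paths are what the discussion above takes as evidence that the model distribution has not collapsed to a set of isolated points.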
With refinements to the model, it is possible to move the curve upwards and obtain better sample quality at any level of recall as a consequence. This is left for future work; as this is the initial paper on this approach, its value stems from the foundation it lays for a new research direction upon which subsequent work can be built, as opposed to the current themselves. For this paper, we made a deliberate decision to keep the model simple, since non-essential practically motivated enhancements are less grounded in theory, may obfuscate the key underlying idea and could impart the impression that they are critical to making the approach work in practice. The fact that our method is able to generate more plausible samples on CIFAR-10 than other methods at similar stages of development, such as the initial versions of GAN BID21 and PixelRNN (van den), despite the minimal sophistication of our method and architecture, shows the promise of the approach. Later iterations of other methods incorporate additional supervision in the form of pretrained weights and/or make task-specific modifications to the architecture and training procedure, which were critical to achieving state-of-the-art sample quality. We do believe the question of how the architecture should be refined in the context of our method to take advantage of task-specific insights is an important one, and is an area ripe for future exploration. In this section, we consider and address some possible concerns about our method. It has been suggested BID29 that maximizing likelihood leads to poor sample quality because when the model is underspecified, it will try to cover all modes of the empirical data distribution and therefore assign high density to regions with few data examples. There is also empirical evidence BID23 for a negative correlation between sample quality and log likelihood, suggesting an inherent trade-off between maximizing likelihood and achieving good sample quality. A popular solution is to minimize reverse KL-divergence instead, which trades off recall for pre-cision. This is an imperfect solution, as the ultimate goal is to model all the modes and generate high-quality samples. Note that this apparent trade-off exists that the model capacity is assumed to be fixed. We argue that a more promising approach would be to increase the capacity of the model, so that it is less underspecified. As the model capacity increases, avoiding mode dropping becomes more important, because otherwise there will not be enough training data to fit the larger number of parameters to. This is precisely a setting appropriate for maximum likelihood. As a , it is possible that a combination of increasing the model capacity and maximum likelihood training can achieve good precision and recall simultaneously. When the model has infinite capacity, minimizing distance from data examples to their nearest samples will lead to a model distribution that memorizes data examples. The same is true if we maximize likelihood. Likewise, minimizing any divergence measure will lead to memorization of data examples, since the minimum divergence is zero and by definition, this can only happen if the model distribution is the same as the empirical data distribution, whose support is confined to the set of data examples. This implies that whenever we have a finite number of data examples, any method that learns a model with infinite capacity will memorize the data examples and will hence overfit. 
To get around this, most methods learn a parametric model with finite capacity. In the parametric setting, the minimum divergence is not necessarily zero; the same is true for the minimum distance from data examples to their nearest samples. Therefore, the optimum of these objective functions is not necessarily a model distribution that memorizes data examples, and so overfitting will not necessarily occur. observes that the data distribution and the model distribution are supported on low-dimensional manifolds and so they are unlikely to have a non-negligible intersection. They point out D KL (p data p θ) would be infinite in this case, or equivalently, the likelihood would be zero. While this does not invalidate the theoretical soundness of maximum likelihood, since the maximum of a non-negative function that is zero almost everywhere is still well-defined, it does cause a lot of practical issues for gradient-based learning, as the gradient is zero almost everywhere. This is believed to be one reason that models like variational autoencoders BID32 BID45 use a Gaussian distribution with high variance for the conditional likelihood/observation model rather than a distribution close to the Dirac delta, so that the support of the model distribution is broadened to cover all the data examples.This issue does not affect our method, as our loss function is different from the log-likelihood function, even though their optima are the same (under some conditions). As the , the gradients of our loss function are different from those of log-likelihood. When the supports of the data distribution and the model distribution do not overlap, each data example is likely far away from its nearest sample and so the gradient is large. Moreover, the farther the data examples are from the samples, the larger the gradient gets. Therefore, even when the gradient of log-likelihood can be tractably computed, there may be situations when the proposed method would work better than maximizing likelihood directly. We presented a simple and versatile method for parameter estimation when the form of the likelihood is unknown. The method works by drawing samples from the model, finding the nearest sample to every data example and adjusting the parameters of the model so that it is closer to the data example. We showed that performing this procedure is equivalent to maximizing likelihood under some conditions. The proposed method can capture the full diversity of the data and avoids common issues like mode collapse, vanishing gradients and training instability. The method combined with vanilla model architectures is able to achieve encouraging on MNIST, TFD and CIFAR-10. Before proving the main , we first prove the following intermediate : • There is a bounded set S ⊆ Ω such that bd(S) ⊆ Ω, θ * ∈ S and ∀f i, ∀θ ∈ Ω \ S, f i (θ) > f i (θ *), where bd(S) denotes the boundary of S. DISPLAYFORM0 • DISPLAYFORM1... DISPLAYFORM2 Proof. Let S ⊆ Ω be the bounded set such that bd(S) ⊆ Ω, θ * ∈ S and ∀f i, ∀θ ∈ Ω \ S, f i (θ) > f i (θ *). Consider the closure of S:= S ∪ bd(S), denoted asS. Because S ⊆ Ω and bd(S) ⊆ Ω, S ⊆ Ω. Since S is bounded,S is bounded. BecauseS ⊆ Ω ⊆ R d and is closed and bounded, it is compact. is differentiable on Ω and hence continuous on Ω. By the compactness ofS and the continuity of DISPLAYFORM0 since Φ is strictly increasing. Because Φ (·) > 0, w i > 0 and so DISPLAYFORM1 At the same time, since θ * ∈ S ⊂S, by definition ofθ, DISPLAYFORM2. 
Combining these two facts yields DISPLAYFORM3 Since the inequality is strict, this implies that θ /∈ Ω \ S, and soθ DISPLAYFORM4 In addition, becauseθ is the minimizer of DISPLAYFORM5 On the other hand, since Φ is differentiable on V and f i (θ) ∈ V for all θ ∈ Ω, Φ (f i (θ)) exists for all θ ∈ Ω. So, DISPLAYFORM6 Combining this with the fact that θ * is the minimizer of DISPLAYFORM7 Because ∀θ ∈ Ω, if θ = θ *, ∃j ∈ Sinceθ is a critical point on Ω, we can conclude that θ * =θ, and so θ * is a minimizer of N i=1 w i Φ(f i (·)) on Ω. Since any other minimizer must be a critical point and θ * is the only critical point, θ * is the unique minimizer. So, arg min θ∈Ω Let > 0 be arbitrary. Since p(·) is continuous at x 0, by definition, ∀˜ > 0 ∃δ > 0 such that ∀u ∈ B x0 (δ), |p(u) − p(x 0)| <˜. Letδ > 0 be such that ∀u ∈ B x0 (δ), p(x 0) − < p(u) < p(x 0) +. We choose δ =δ. Let 0 <h < δ be arbitrary. Since p(x 0)− < p(u) < p(x 0)+ ∀u ∈ B x0 (δ) = B x0 (δ) ⊃ B x0 (h), DISPLAYFORM8 | We develop a new likelihood-free parameter estimation method that is equivalent to maximum likelihood under some conditions | 610 | scitldr |
While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard constraints. In this setting, the agent is attempting to maximize cumulative rewards subject to these given constraints on their behavior. We reformulate the problem of IRL on Markov Decision Processes (MDPs) such that, given a nominal model of the environment and a nominal reward function, we seek to estimate state, action, and feature constraints in the environment that motivate an agent’s behavior. Our approach is based on the Maximum Entropy IRL framework, which allows us to reason about the likelihood of an expert agent’s demonstrations given our knowledge of an MDP. Using our method, we can infer which constraints can be added to the MDP to most increase the likelihood of observing these demonstrations. We present an algorithm which iteratively infers the Maximum Likelihood Constraint to best explain observed behavior, and we evaluate its efficacy using both simulated behavior and recorded data of humans navigating around an obstacle. Advances in mechanical design and artificial intelligence continue to expand the horizons of robotic applications. In these new domains, it can be difficult to design a specific robot behavior by hand. Even manually specifying a task for a reinforcement-learning-enabled agent is notoriously difficult . Inverse Reinforcement Learning (IRL) techniques can help alleviate this burden by automatically identifying the objectives driving certain behavior. Since first being introduced as Inverse Optimal Control by , much of the work on IRL has focused on learning environmental rewards to represent the task of interest (; ; ;). While these types of IRL algorithms have proven useful in a variety of situations (; ; ;, their basis in assuming that reward functions fully represent task specifications makes them ill suited to problem domains with hard constraints or non-Markovian objectives. Recent work has attempted to address these pitfalls by using demonstrations to learn a rich class of possible specifications that can represent a task . Others have focused specifically on learning constraints, that is, behaviors that are expressly forbidden or infeasible (; Pérez-; ; ;). Such constraints arise in safety-critical systems, where requirements such as an autonomous vehicle avoiding collisions with pedestrians are more naturally expressed as hard constraints than as soft reward penalties. It is towards the problem of inferring such constraints that we turn our attention. In this work, we present a novel method for inferring constraints, drawing primarily from the Maximum Entropy approach to IRL described by. We use this framework to reason about the likelihood of observing a set of demonstrations given a nominal task description, as well as about their likelihood if we imposed additional constraints on the task. This knowledge allows us to select a constraint, or set of constraints, which maximizes the demonstrations' likelihood and best explains the differences between expected and demonstrated behavior. 
Our method improves on prior work by being able to simultaneously consider constraints on states, actions and features in a Markov Decision Process (MDP) to provide a principled ranking of all options according to their effect on demonstration likelihood. A formulation of the IRL problem was first proposed by as the Inverse problem of Optimal Control (IOC). Given a dynamical system and a control law, the author sought to identify which function(s) the control law was designed to optimize. This problem was brought into the domain of MDPs and Reinforcement Learning (RL) by , who proposed IRL as the task of, given an MDP and a policy (or trajectories sampled according to that policy), find a reward function with respect to which that policy is optimal. One of the chief difficulties in the problem of IRL is the fact that a policy can be optimal with respect to a potentially infinite set of reward functions. The most trivial example of this is the fact that all policies are optimal with respect to a null reward function that always returns zero. Much of the subsequent work in IRL has been devoted to developing approaches that address this ambiguity by imposing additional structure to make the problem well-posed . approach the problem by employing the principle of maximum entropy , which allows the authors to develop an IRL algorithm that produces a single stochastic policy that matches feature counts without adding any additional constraints to the produced behavior. This so called Maximum Entropy IRL (MaxEnt) provides a framework for reasoning about demonstrations from experts who are noisily optimal. The induced probability distribution over trajectories forms the basis for our efforts in identifying the most likely behavior-modifying constraints. While Markovian rewards do often provide a succinct and expressive way to specify the objectives of a task, they cannot capture all possible task specifications. highlight the utility of non-Markovian Boolean specifications which can describe complex objectives (e.g. do this before that) and compose in an intuitive way (e.g. avoid obstacles and reach the goal). The authors of that work draw inspiration from the MaxEnt framework to develop their technique for using demonstrations to calculate the posterior probability that an agent is attempting to satisfy a Boolean specification. A subset of these types of specifications that is of particular interest to us is the specification of constraints, which are states, actions, or features of the environment that must be avoided. explore how to infer trajectory feature constraints given a nominal model of the environment (lacking the full set of constraints) and a set of demonstrated trajectories. The core of their approach is to sample from the set of trajectories which have better performance than the demonstrated trajectories. They then infer that the set of possible constraints is the subset of the feature space that contains the higher-reward sampled trajectories, but not the demonstrated trajectories. Intuitively, they reason that if the demonstrator could have passed through those features to earn a higher reward, but did not, then there must have been a previously unknown constraint preventing that behavior. However, while their approach does allow for a cost function to rank elements from the set of possible constraints, the authors do not offer a mechanism for determining what cost function will best order these constraints. 
Our approach to constraint inference from demonstrations addresses this open question by providing a principled ranking of the likelihood of constraints. We adapt the MaxEnt framework to allow us to reason about how adding a constraint will affect the likelihood of demonstrated behaviors, and we can then select the constraints which maximize this likelihood. We consider feature-space constraints as in , and we explicitly augment the feature space with state-and action-specific features to directly compare the impacts of state-, action-, and feature-based constraints on demonstration likelihood. Following the formulation presented by , we base our work in the setting of a (finite-state) Markov Decision Process (MDP). We define an MDP M as a tuple (S, {A s}, {P s,a}, D 0, φ, R) where S is a finite set of discrete states; {A s} is a set of the sets of actions available to be taken for each state s, such that A s ⊆ A, where A is a finite set of discrete actions; {P s,a} is a set of state transition probability distributions such that P s,a (s) = P (s |s, a) is the probability of transitioning to state s after taking action a from state s; D 0: S → is an initial state distribution; φ: S × A → R k + is a mapping to a k-dimensional space of non-negative features; and R: S × A → R is a reward function. A trajectory ξ through this MDP is a sequence of states s t and actions a t such that s 0 ∼ D 0 and state s i+1 ∼ P si,ai. Actions are chosen by an agent navigating the MDP according to a, potentially time-varying, policy π such that π(·|s, t) is a probability distribution over actions in A s. We denote a finite-time trajectory of length T + 1 by ξ = {s 0:T, a 0:T}. At every time step t, a trajectory will accumulate features equal to φ(s t, a t). We use the notation φ i (·, ·) to refer to the i-th element of the feature map, and we use the label φ i to denote the i-th feature itself. We also introduce an augmented indicator feature mapping φ 1: S × A → {0, 1} n φ, where n φ = k + |S| + |A|. This augmented feature map uses binary variables to indicate the presence of a feature and expands the feature space by adding binary features to track occurrences of each state and action, such that Typically, agents are modeled as trying to maximize, or approximately maximize, the total reward earned for a trajectory ξ, given by R(ξ) = T t=0 γ t R(s t, a t), where γ ∈ is a discount factor. Therefore, an agent's policy π is closely tied to the form of the MDP's reward function. Conventional IRL focuses on inferring a reward function that explains an agent's policy, revealed through the behavior observed in a set of demonstrated trajectories D. However, our method for constraint inference poses a different challenge: given an MDP M, including a reward function, and a set of demonstrations D, find the most likely set of constraints C * that modify M. We define our notion of constraints in the following section. Constraints are those behaviors that are not disallowed explicitly by the structure of the MDP, but which would be infeasible or prohibited for the underlying system being modeled by the MDP. This sort of discrepancy can occur when a generic or simplified MDP is designed without exact knowledge of specific constraints for the modeled system. For instance, for a generic MDP modeling the behavior of cars, we might want to include states for speeds up to 500km/h and actions for accelerations up to 12m/s 2. 
However, for a specific car on a specific roadway, the set of states where the vehicle travels above 100km/h may be prohibited because of a speed limit, and the set of actions where the vehicle accelerates above 4m/s 2 may be infeasible because of the physical limitations of the vehicle's engine. Therefore, any MDP trajectory of this specific car system would not contain a state-action pair which violates these legal and physical limits. Figure 1 shows an example of constraints driving behavior. We define a constraint set C i ⊆ S × A as a set of state-action pairs that violate some specification of the modeled system. We consider three general classes of constraints: state constraints, action constraints, and feature constraints. A state constraint set C si = {(s, a) | s = s i } includes all state-action pairs such that the state component is s i. An action constraint set C ai = {(s, a) | a = a i } includes all state-action pairs such that the action component is a i. A feature constraint set C φi = {(s, a) | φ i (s, a) > 0} includes all state-action pairs that produce a non-zero value for feature φ i. If we augment the set of features as described in, it is straightforward to see that state and action constraints become special cases of feature constraints, where It is also evident that we can obtain compound constraints, respecting two or more conditions, by taking the union of constraint sets C i to obtain C = i C i. We need to be able to reason about how adding a constraint to an MDP will influence the behavior of agents navigating that environment. If we impose a constraint on an MDP, then none of the state-action pairs in that constraint set may appear in a trajectory of the constrained MDP. To enforce this condition, we must restrict the actions available in each state so that it is not possible for an agent to produce one of the constrained state-action pairs. For a given constraint C, we can replace the set of available actions A s in every state s with an alternative set A Performing such substitutions for an MDP M will lead to a modified MDP M C such that. The question then arises as to the how we should treat states with empty action sets A C s = ∅. Since an agent arriving in such an empty state would have no valid action to select, any trajectory visiting an empty state must be deemed invalid. Indeed, such empty action sets will be produced for any state For MDPs with deterministic transitions, agents know precisely which state they will arrive in following a certain action. Therefore, any agent respecting constraints will not take an action that leads to an empty state, since doing so will lead to constraint violations. If we consider the set of empty states S empty, then for the purposes of reasoning about an agent's behavior, we can impose an additional constraint set C empty = {(s, a) | ∃ s empty ∈ S empty: P s,a (s empty) = 1}. In this work, we will always implicitly add this constraint set, such that M C will be equivalent to M C∪Cempty, and we recursively add these constraints until reaching a fixed point. For MDPs with stochastic transitions, the semantics of an empty state are less obvious and could lend themselves to multiple interpretations depending on the nature of the system being modeled. We offer a possible treatment in the appendix. The nominal MDPs to which we will add constraints fall into two broad categories, which we denote as generic and baseline. 
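Before turning to these two categories of nominal MDPs, the action-set restriction and empty-state propagation just described can be summarised in a few lines for the deterministic case. The dictionary-based representation below is an illustrative choice, not the paper's implementation.

```python
def constrain_mdp(A_s, next_state, constraint):
    """Restrict per-state action sets under a constraint set and, for deterministic
    transitions, recursively forbid actions that lead into states left with no actions.

    A_s        -- dict: state -> set of available actions
    next_state -- dict: (state, action) -> successor state (deterministic dynamics)
    constraint -- set of forbidden (state, action) pairs
    """
    C = set(constraint)
    while True:
        allowed = {s: {a for a in acts if (s, a) not in C} for s, acts in A_s.items()}
        empty = {s for s, acts in allowed.items() if not acts}
        # C_empty: remaining actions that transition with certainty into an empty state.
        extra = {(s, a) for s, acts in allowed.items() for a in acts
                 if next_state[(s, a)] in empty}
        if not extra:            # fixed point reached
            return allowed, empty
        C |= extra
```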
The car MDP described at the beginning of Section 3.2 is an example of a generic nominal model: its state and action spaces are broad enough to encompass a wide range of car models, and we can use this nominal model to infer constraints that specialize the MDP to a specific car and task. For a generic nominal MDP, the reward function may also come from a generic, simplified task, such as "minimize time to the goal" or "minimize energy usage." A baseline nominal MDP is a snapshot of a system from a point in time where it was well characterized. With a baseline nominal MDP, the constraints that we infer will represent changes to the system with respect to this baseline. In this case, the nominal reward function can be learned using existing IRL techniques with demonstrated behavior from the baseline model. We take this approach in our human obstacle avoidance example in Section 4.2: we use demonstrations of humans walking through the empty space to learn a nominal reward, then we can detect the presence of a new obstacle in the space from subsequent demonstrations. Our goal is to find the constraints C * which are most likely to have been added to a nominal MDP M, given a set of demonstrations D from an agent navigating the constrained MDP. Let us define P M to denote probabilities given that we are considering MDP M. Our problem then becomes to select the constraints that maximize P M (C | D). If we assume a uniform prior over possible constraints, then we know from Bayes' Rule that P M (C | D) ∝ P M (D | C). Therefore, in order to find the constraints that maximize P M (C | D), we can solve the equivalent problem of finding which constraints maximize the likelihood of the given demonstrations. In this section, we present our approach to solving maximum likelihood constraint inference via solving demonstration likelihood maximization. Under the maximum entropy model presented by , the probability of a certain finite-length trajectory ξ being executed by an agent traversing a deterministic MDP M is exponentially proportional to the reward earned by that trajectory. where Z is the partition function, 1 M (ξ) indicates if the trajectory is feasible for this MDP, and β ∈ [0, ∞) is a parameter describing how closely an agent adheres to the task of optimizing the reward function (as β → ∞, the agent becomes a perfect optimizer, and as β → 0, the agent's actions become perfectly random). In the sequel, we assume that a given reward function will appropriately capture the role of β, so we omit β from our notation without loss of generality. In the case of finite horizon planning, the partition function will be the sum of the exponentially weighted rewards for all feasible trajectories on MDP M of length no greater than the planning horizon. We denote this set of trajectories by Ξ M. Because adding constraints C modifies the set of feasible trajectories, we express this dependence as Assuming independence among demonstrated trajectories, the probability of observing a set D of N demonstrations is given by the product Our goal is to maximize the demonstration probability given by. Because we take the reward function and demonstrations as given, our only available decision variable in this maximization is the constraint set C which alters the indicator 1 M C and partition function Z(C). where C ⊆ 2 S×A is the hypothesis space of possible constraints. 
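For a small MDP where the set of feasible finite-horizon trajectories can be enumerated, the quantities above can be computed directly. The sketch below evaluates the demonstration log-likelihood under a candidate constraint, with β folded into the reward as assumed in the text; trajectory enumeration itself is omitted and all names are illustrative.

```python
import numpy as np

def demo_log_likelihood(demo_returns, feasible_returns):
    """log P(D | C) = sum_i R(xi_i) - N * log Z(C), where Z(C) sums exp(R(xi))
    over all feasible finite-horizon trajectories of the constrained MDP."""
    log_Z = np.logaddexp.reduce(np.asarray(feasible_returns, dtype=np.float64))
    return float(np.sum(demo_returns) - len(demo_returns) * log_Z)
```

Since the demonstration returns are fixed, maximizing this quantity over constraints amounts to shrinking Z(C), which is exactly what the argument below exploits.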
From the form of, it is clear that to solve, we must choose a constraint set that does not invalidate any demonstrated trajectory while simultaneously minimizing the value of Z(C). Consider the set of trajectories that would be made infeasible by augmenting the MDP with constraint C, which we denote as Ξ The value of Z(C) is minimized when we maximize the sum of exponentiated rewards of these infeasible trajectories. Considering the form of the trajectory probability given by, we can see that this sum is proportional to the total probability of observing a trajectory from This insight leads us to the final form of the optimization In order to solve, we must reason about the probability distribution of trajectories on the original MDP M, then find the constraint C such that Ξ − M C contains the most probability mass while not containing any demonstrated trajectories. We highlight here that the fact that the chosen C must not conflict with any demonstration is an important condition: all provided demonstrations must perfectly respect a constraint in order for it to be learned, otherwise a less restrictive set of constraints may be learned instead. Future work will look to relax this requirement in order to learn about constraints that are generally respected by a set of demonstrations, without needing to first isolate just the successful, constraint-respecting demonstrations. While equation is derived for deterministic MDPs, if we can assume, as proposed by , that for a given stochastic MDP, the stochastic outcomes have little effect on an agent's behavior and the partition function, then the solution to will also approximate the optimal constraint selection for that MDP. However, in order to fully address the stochastic case, we would need to reformulate our approach based on maximum causal entropy . We save this extension for future work. In order for the solutions to to be meaningful, we must be careful with our choice of the constraint hypothesis space C. For instance, if we let C = 2 S×A, then the optimal solution will always be to choose the most restrictive C to constrain all state-action pairs not observed in the demonstration set. One approach to avoid this trivial solution is to use domain knowledge of the modeled system to restrict C to a library of plausible or common constraints. construct such a library by using reachability theory to calculate a family of likely unsafe sets. Another approach, which we will explore in this work, is the use of minimal constraint sets for our hypothesis space. These minimal sets constrain a single state, action, or feature, and were introduced in Section 3.2 as C si, C ai, and C φi, respectively. By iteratively selecting minimal constraint sets, it is possible to gradually grow the full estimated constraint set and avoid over-fitting to the demonstrations. Section 3.4 details our approach for selecting the most likely minimal constraint, and Section 3.5 details our approach for iteratively growing the estimated constraint set. As detailed in Section 3.3, the most likely constraint set is the one whose eliminated trajectories Ξ − M C have the highest probability of being demonstrated on the original, unconstrained MDP. Therefore, to find the most likely of the minimal constraints, we must find the expected proportion of trajectories which will contain any state or action, or accrue any feature. By using our augmented indicator feature map from, we can reduce this problem to only examine feature accruals. 
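On a small MDP the selection rule above can be implemented by brute force: score every candidate constraint by the probability mass of the trajectories it eliminates under the nominal MaxEnt distribution, and discard any candidate that a demonstration violates. All names below are illustrative, and the paper's Algorithm 1 replaces the explicit enumeration used here.

```python
import numpy as np

def most_likely_constraint(trajectories, returns, demos, candidates, violates):
    """trajectories/returns: all feasible trajectories of the nominal MDP and their rewards;
    demos: demonstrated trajectories; candidates: iterable of candidate constraint sets;
    violates(xi, C): True if trajectory xi contains a state-action pair in C."""
    w = np.exp(np.asarray(returns) - np.max(returns))
    p = w / w.sum()                                   # MaxEnt distribution on the nominal MDP
    best, best_mass = None, -1.0
    for C in candidates:
        if any(violates(xi, C) for xi in demos):      # demonstrations must remain feasible
            continue
        mass = sum(pi for xi, pi in zip(trajectories, p) if violates(xi, C))
        if mass > best_mass:
            best, best_mass = C, mass
    return best, best_mass
```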
present their forward-backward algorithm for calculating expected feature counts for an agent following a policy in the maximum entropy setting. This algorithm nearly suffices for our purposes, but it computes the expectation of the total number of times a feature will be accrued (i.e. how often will this feature be observed per trajectory), rather than the expectation of the number of trajectories that will accrue that feature at any point. To address this problem, we present a modified form of the "forward" pass as Algorithm 1. Our algorithm tracks state visitations as well as feature accruals at each state, which allows us to produce the same maximum entropy distribution over trajectories as while not counting additional accruals for trajectories that have already accrued a feature. The input of Algorithm 1 includes the MDP itself, a time horizon, and a time-varying policy. This policy should capture the expected behavior of the demonstrator on the nominal MDP M, and can be computed via the "backward" part of the algorithm from. The output of Algorithm 1, Φ [1,T], is an n φ × T array such that the t-th column Φ t is a vector whose i-th entry is the expected proportion of trajectories to have accrued the i-th feature by time t. In particular, the i-th element of Φ T is equal to P M (Ξ − M C i), which allows us to now directly select the most likely constraint according to. When using minimal constraint sets as the constraint hypothesis space, it is possible that the most likely constraint still does not provide a satisfactory explanation for the demonstrated behavior. In this case, it can be beneficial to combine minimal constraints. Input: MDP M, constraint hypothesis space C, empirical probability distribution is framed as finding the combination of constraint sets that "covers" the most probability mass, then the problem becomes a direct analog for the classic maximum coverage problem. While this problem is known to be NP-hard, there exist a simple greedy algorithm with known suboptimality bounds . We present Algorithm 2 as our approach for adapting this greedy heuristic to solve the problem of constraint inference. At each iteration, we grow our estimated constraint set by augmenting it with the constraint set in our hypothesis space that covers the most currently uncovered probability mass. By analog to the maximum coverage problem, we derive the following bound on the suboptimality of our approach. Theorem 1. Let C nc be the set of all constraints C nc such that C nc = nc i=1 C i for C i ∈ C, and let C * nc be the solution to using C nc as the constraint hypothesis space. It follows, then, that at the end of every iteration i of Algorithm 2, This bound is directly analogous to the suboptimality bound for the greedy solution to the maximum coverage problem proven by . For space, the proof is included in the appendix. Rather than selecting the number of constraints n c to be used ahead of time, we check a condition based on KL divergence to decide if we should continue to add constraints. The quantity D KL P D || P M C * provides a measure of how well the distribution over trajectories induced by our inferred constraints, P M C *, agrees with the empirical probability distribution over trajectories observed in the demonstrations, P D. The threshold parameter d DKL is chosen to avoid over-fitting to the demonstrations, combating the tendency to select additional constraints that may only marginally better align our predictions with the demonstrations. 
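A sketch of the forward pass in the spirit of Algorithm 1 is given below for a single binary feature (under the augmented indicator map, states and actions are handled the same way). It tracks the joint distribution over the current state and an "already accrued" flag, so each trajectory is counted at most once; the array layout and the per-feature loop are choices of this sketch rather than necessarily those of the paper.

```python
import numpy as np

def accrual_proportions(D0, P, policy, accrues, T):
    """Expected proportion of trajectories that have accrued a binary feature by each time step.

    D0      -- initial state distribution, shape (n_states,)
    P       -- transition tensor, P[s, a, s']
    policy  -- time-varying MaxEnt policy, policy[t][s, a]
    accrues -- accrues(s, a) -> True if the pair produces the feature
    """
    n_states, n_actions, _ = P.shape
    d = np.zeros((n_states, 2))          # joint distribution over (state, accrued flag)
    d[:, 0] = D0
    out = []
    for t in range(T):
        d_next = np.zeros_like(d)
        for s in range(n_states):
            for f in (0, 1):
                if d[s, f] == 0.0:
                    continue
                for a in range(n_actions):
                    pa = policy[t][s, a]
                    if pa == 0.0:
                        continue
                    f_next = 1 if (f == 1 or accrues(s, a)) else 0
                    d_next[:, f_next] += d[s, f] * pa * P[s, a]
        d = d_next
        out.append(d[:, 1].sum())        # P(feature accrued within the first t+1 steps)
    return np.array(out)
```

Running this for each augmented indicator feature yields the Φ array; its final column gives the eliminated probability mass for each minimal constraint, and the greedy loop of Algorithm 2 repeatedly adds the arg-max until the KL-divergence check falls below d_DKL.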
We consider the grid world MDP presented in Figure 2. The environment consists of a 9-by-9 grid of states, and the actions are to move up, down, left, right, or diagonally by one cell. The objective is to move from the starting state in the bottom-left corner (s 0) to the goal state in the bottom-right corner (s G). Every state-action pair produces a distance feature, and the MDP reward is negative distance, which encourages short trajectories. There are additionally two more features, denoted green and blue, which are produced by taking actions from certain states, as shown in Figure 2. The true MDP, from which agents generate trajectories, is shown in Figure 2a, including its constraints. The nominal, more generic MDP shown in Figure 2b is what we take as M for applying the iterative maximum likelihood constraint inference in Algorithm 2, with feature accruals estimated using Algorithm 1. While Figures 2c through 2e show the iteratively estimated constraints, which align with the true constraints, it is interesting to note that not all constraints present in the true MDP are identified. For instance, it is so unlikely that an agent would ever select the up-left diagonal action, that the fact that demonstrated trajectories did not contain that action is unsurprising and does not make that action an estimated constraint. Figure 3 shows how the performance of our approach varies based on the number of available demonstrations and the selection for the threshold d DKL. The false positive rate shown in Figure 3a is the proportion of selected constraints which are not constraints of the true system. We can observe two trends in this data that we would expect. First, lower values of d DKL lead to greater false positive rates since they allow Algorithm 2 to continue iterating and accept constraints that do less to align expectations and demonstrations. Second, having more demonstrations available provides more information and reduces the rate of false positives. Further, Figure 3b shows that more demonstrations also allows the behavior predicted by constraints to better align with the observations. It is interesting to note, however, that with fewer than 10 demonstrations and a very low d DKL, we may produce very low KL divergence, but at the cost of a high false positive rate. This phenomenon highlights the role of selecting d DKL to avoid over-fitting. The threshold d DKL = 0.1 achieves a good balance of producing few false positives with sufficient examples while also producing lower KL divergences, and we used this threshold to produce the in Figures The shaded region represents an obstacle in the human's environment, and the red "X"s represent learned constraints. In our second example, we analyze trajectories from humans as they navigate around an obstacle on the floor. We map these continuous trajectories into trajectories through a grid world where each cell represents a 1ft-by-1ft area on the ground. The human agents are attempting to reach a fixed goal state (s G) from a given initial state (s 0), as shown in Figure 4. We performed MaxEnt IRL on human demonstrations of the task without the obstacle to obtain the nominal distance-based reward function. We restrict ourselves to estimating only state constraints, as we do not supply our algorithm with knowledge of any additional features in the environment and we assume that the humans' motion is unrestrained. Demonstrations were collected from 16 volunteers, and the of performing constraint inference are shown in Figure 4. 
Our method is able to successfully predict the existence of a central obstacle. While we do not estimate every constrained state, the constraints that we do estimate make all of the obstacle states unlikely to be visited. In order to identify those states as additional constraints, we would have to decrease our d DKL threshold, which could also lead to more spurious constraint selections, such as the three shown in Figure 4. We have presented our novel technique for learning constraints from demonstrations. We improve upon previous work in constraint-learning IRL by providing a principled framework for identifying the most likely constraint(s), and we do so in a way that explicitly makes state, action, and feature constraints all directly comparable to one another. We believe that the numerical presented in Section 4 are promising and highlight the usefulness of our approach. Despite its benefits, one drawback of our approach is that the formulation is based on, which only exactly holds for deterministic MDPs. As mentioned in Section 3.3, we plan to investigate the use of a maximum causal entropy approach to address this issue and fully handle stochastic MDPs. Additionally, the methods presented here require all demonstrations to contain no violations of the constraints we will estimate. We believe that softening this requirement, which would allow reasoning about the likelihood of constraints that are occasionally violated in the demonstration set, may be beneficial in cases where trajectory data is collected without explicit labels of success or failure. Finally, the structure of Algorithm 1, which tracks the expected features accruals of trajectories over time, suggests that we may be able to reason about non-Markovian constraints by using this historical information to our advantage. Overall, we believe that our formulation of maximum likelihood constraint inference for IRL shows promising and presents attractive avenues for further investigation. For MDPs with non-deterministic transitions, the semantics of an empty state are less obvious and could lend themselves to multiple interpretations depending on the nature of the system being modeled. In our context, we use constraints to describe how observed behaviors from demonstrations differ from possible behaviors allowed by the nominal MDP structure. We therefore assume that any demonstrations provided are, by the fact that they were selected to be provided, consistent with the system's constraints, including avoiding empty states. This assumption implies that any stochastic state transitions that would have led to an empty state will not be observed in trajectories from the demonstration set. The omission of these transitions means that, for a given (s, a), if P s,a (S empty) = p, then a proportion p of these (s, a) pairs which occur as an agent navigates the environment will be excluded from demonstrations. Therefore, as we modify the MDP to reason about demonstrated behavior, we need updated transition probabilities which eliminate the probability mass of transitioning to empty states, an event which will never be observed in a demonstration. Such modified probabilities can be given as We must also capture the change to observed state-action pair frequencies by understanding that any observed policy π C will be related to an agent's actual policy π according to π C (a|s, t) = π(a|s, t)(1 − P s,a (S empty)) a ∈A C s π(a |s, t)(1 − P s,a (S empty)). 
It is important to note that the modifications presented in and for non-deterministic MDPs are not meant to directly reflect the reality of the underlying system (we wouldn't expect the actual transition dynamics to change, for instance), but to reflect the apparent behavior that we would expect to observe in the subset of trajectories that would be selected as demonstrations. We further note that applying these modifications to deterministic MDPs will in the same expected behavior as augmenting the constraint set with C empty. Theorem 2. Let C nc be the set of all constraints C nc such that C nc = nc i=1 C i for C i ∈ C, and let C * nc be the solution to using C nc as the constraint hypothesis space. It follows, then, that at the end of every iteration i of Algorithm 2, Proof. The problem of finding C * nc is analogous to solving the maximum coverage problem, where the set of elements to be covered is the set of trajectories {ξ | ∃ C ∈ C : ξ ∈ Ξ − M C and D ∩ Ξ − M C = ∅} and the weight of each element ξ is P M (ξ). Because Algorithm 2 constructs C * iteratively by taking the union of the previous value of C * and the set C i ∈ C which solves, the value of C * at the end of the i-th iteration is analogous to the greedy solution of the maximum coverage problem with n c = i. Therefore, we can directly apply the suboptimality bound for the greedy solution proven in to arrive at our given bound on eliminated probability mass. | Our method infers constraints on task execution by leveraging the principle of maximum entropy to quantify how demonstrations differ from expected, un-constrained behavior. | 611 | scitldr |
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in the real world, and propose a particular instantiation of such a system. Subsequently, we investigate a number of challenges of learning without instrumentation -- including the lack of episodic resets, state estimation, and hand-engineered rewards -- and propose simple, scalable solutions to these challenges. We demonstrate the efficacy of our proposed system on dexterous robotic manipulation tasks in simulation and the real world, and also provide an insightful analysis and ablation study of the challenges associated with this learning paradigm. Reinforcement learning (RL) can in principle enable real-world autonomous systems, such as robots, to autonomously acquire a large repertoire of skills. Perhaps more importantly, reinforcement learning can enable such systems to continuously improve the proficiency of their skills from experience. However, realizing this promise in reality has proven challenging: even with reinforcement learning methods that can acquire complex behaviors from high-dimensional low-level observations, such as images, the typical assumptions of the reinforcement learning problem setting do not fit perfectly into the constraints of the real world. For this reason, most successful robotic learning experiments have been demonstrated with varying levels of instrumentation, in order to make it practical to define reward functions (e.g. by using auxiliary sensors (a; ;) ), and in order to make it practical to reset the environment between trials (e.g. using manually engineered contraptions). In order to really make it practical for autonomous learning systems to improve continuously through real-world operation, we must lift these constraints and design learning systems whose assumptions match the constraints of the real world, and allow for uninterrupted continuous learning with large amounts of real world experience. What exactly is holding back our reinforcement learning algorithms from being deployed for learning robotic tasks (for instance manipulation) directly in the real world? We hypothesize that our current reinforcement learning algorithms make a number of unrealistic assumptions that make real world deployment challenging -access to low-dimensional Markovian state, known reward functions, and availability of episodic resets. In practice, this means that significant human engineering is required to materialize these assumptions in order to conduct real-world reinforcement learning, which limits the ability of learning-enabled robots to collect large amounts of experience automatically in a variety of naturally occuring environments. Even if we can engineer a complex solution for instrumentation in one environment, the same may need to be done for every environment being learned in. When using deep function approximators, actually collecting large amounts of real world experience is typically crucial for effective generalization. The inability to collect large amounts of real world data autonomously significantly limits the ability of these robots to learn robust, generalizable behaviors. 
In this work, we propose that overcoming these challenges requires designing robotic systems that possess three fundamental capabilities: they are able to learn from their own raw sensory inputs, they are able to assign rewards to their own behaviors with minimal human intervention, and they are able to learn continuously in non-episodic settings without requiring human operators to manually reset the environment. We believe that a system with these capabilities will bring us significantly closer to the goal of continuously improving robotic agents that leverage large amounts of their own real world experience, without requiring significant human instrumentation and engineering effort. Having laid out these requirements, we propose a practical instantiation of such a learning system, which affords the above capabilities. While prior works have studied each of these issues in isolation, combining solutions to these issues is non-trivial and results in a particularly challenging learning problem. We provide a detailed empirical analysis of these issues, both in simulation and on a real-world robotic platform, and propose a number of simple but effective solutions that make it possible to produce a complete robotic learning system that can learn autonomously, handle raw sensory inputs, learn reward functions from easily available supervision, and learn without manually designed reset mechanisms. We show that this system is well suited for learning dexterous robotic manipulation tasks in the real world, and substantially outperforms ablations and prior work. While the individual components that we combine to design our robotic learning system are based heavily on prior work, both the combination of these components and their specific instantiations are novel. Indeed, we show that without the particular design decisions motivated by our experiments, naïve designs that follow prior work generally fail to satisfy one of the three requirements that we lay out. Let us start by considering the standard reinforcement learning paradigm, where we operate in a Markov decision process with state space S, action space A, unknown transition dynamics T, unknown reward function R and an episodic initial state distribution ρ. In RL, the goal is to learn a policy that maximizes the expected sum of rewards via environment interactions. (Figure caption: there is no special object instrumentation, only an RGB camera and proprioceptive sensors.) Although this formalism is simple and concise, it does not capture all of the complexities of real-world robotic learning problems. If a robotic system is to learn continuously and autonomously in the real world, we must ensure that it can learn under all assumptions that are imposed by the real world. The real world does not have instrumentation available to easily provide low-dimensional state estimates, rewards or episodic resets. To move from the idealized MDP formulation to the real world, we require a system that has the following properties. Firstly, all of the information necessary for learning must be obtained from the robot's own sensors. This includes all information about the state and necessitates that the policy must be learned from high-dimensional and low-level sensory observations, such as camera images. Secondly, the robot must also obtain the reward signal itself from its own sensor readings.
This is exceptionally difficult for all but the simplest tasks, since reward functions that depend on effecting changes to specific objects in the world require perceiving those objects explicitly. Thirdly, we must be able to learn in a non-episodic manner, without access to episodic resets. A setup with explicit resets is increasingly impractical due to the requirement for significant human engineering or intervention during learning. While some of the components discussed above can be tackled in isolation by current algorithms, there are unique challenges inherent to assembling these components into a complete learning system for real world robotics, as well as certain challenges associated with individual components. In the subsequent discussion, we outline the elements that are required to build a robotic system that can learn in the real world with minimal instrumentation. These elements present interesting challenges in learning when combined together, which we analyze in Section 3 and address in Section 4. To enable learning without complex state estimation systems or instrumenting every environment the robot operates in, we require our robotic systems to be able to learn from their own raw sensory observations. Typically, these sensory observations are raw camera images from a camera mounted on the robot and proprioceptive sensory inputs such as the joint angles. (Figure 2: Schematic comparison of current learning systems versus our proposed instrumentation-free system, R3L. While traditional robotic applications of RL in the real world require explicit supervision in the form of resets, rewards and state estimation, R3L removes these requirements and allows us to learn without any explicit system instrumentation, simply by leveraging interaction with the environment.) These observations do not directly provide the poses of the objects in the scene, which is the typical assumption in simulated robotic environments; any such information must be extracted by the learning system. While in principle many RL frameworks can support learning from raw sensory inputs, it is important to consider the practicalities of this approach. For instance, we can instantiate vision-based RL with policy gradient algorithms such as TRPO and PPO, but these have high sample complexities which make them unsuited for real world robotic learning, and this is further exacerbated when learning from visual inputs. In our work, we consider adopting the general framework of off-policy actor-critic reinforcement learning, using a version of the soft actor critic (SAC) algorithm described by Haarnoja et al. (2018b). This algorithm effectively uses off-policy data, and has been shown to learn some tasks directly from visual inputs. However, while SAC is able to learn directly from raw visual input, most instantiations have required instrumentation for providing rewards and episodic resets. As we show in Section 3, when these assumptions are lifted it leads to a number of non-trivial learning challenges which require new techniques to address. Vision-based RL algorithms, such as SAC, rely on a proper reward function being provided to the system, which is typically hand-defined by a user. While this can be simple to provide in simulation, it is significantly harder to implement in uninstrumented real world environments. In the real world, the robot must obtain the reward signal itself from its own sensor readings, which can be extremely challenging.
A few unappealing options have been suggested to tackle this: engineer complete computer vision systems to detect objects and extract reward signals, engineer reward functions that use various task-specific heuristics to obtain rewards from pixels, or instrument every environment. All are highly manual, tedious processes, and a better solution is needed to scale real world robotic learning gracefully. To devise a system that requires minimal human engineering and supervision for providing rewards, we must use algorithms that are able to assign themselves rewards throughout learning with minimal reward engineering. One candidate is for a user to specify intended behavior beforehand through example images of desired goals. The algorithm can then assign itself reward based on a notion of how well it is accomplishing the specified goals during learning, with no additional human supervision. This scheme scales well since it requires minimal human engineering, and goal images are easy to provide upfront. So how exactly might we design such a reward provision system? To do this, we use a data-driven reward specification framework called variational inverse control with events (VICE), introduced in prior work. VICE learns rewards in a task-agnostic way: we provide the algorithm with success examples in the form of images where the task is accomplished, and learn a discriminator that is capable of distinguishing successes from failures. This discriminator can then be used to provide a learning signal to nudge the reinforcement learning agent towards success. This algorithm has been previously considered in the context of learning some tasks from raw sensory observations in the real world, but we show that it presents unique challenges when used in conjunction with learning without episodic resets. Details and specifics of the algorithms being considered are described in Appendix A and in the original publications. While the components described in Sections 2.1 and 2.2 are essential to building continuously learning RL systems in the real world, they have often been implemented with the assumption of episodic learning. Indeed, previous applications of these methods, such as that of Haarnoja et al. (2018b), were implemented with explicitly designed reset mechanisms or human operators performing resets between trials. However, natural open-world settings do not provide any such reset mechanism, and in order to enable scalable and autonomous real-world learning, we need systems that do not require an episodic formulation of the learning problem. In principle, algorithms such as SAC do not actually require episodic learning; however, in practice, most instantiations of these algorithms have used explicit resets, even in simulation, and removing resets has resulted in failure to solve challenging tasks. While RL algorithms can handle non-episodic settings without any modifications in principle, they struggle when applied in practice to challenging problems. In our experiments in Section 3, we see that simply applying actor-critic methods to the reset-free setting does not learn the intended behaviors, and that novel insights are required when these methods are combined with visual observations and classifier-based rewards. These three components, namely vision-based RL with actor-critic algorithms, a vision-based goal classifier for rewards, and reset-free learning, are the fundamental pieces that we need to build a real world robotic learning system. However, when we actually combine the individual components in Sections 3 and 6, we find that learning effective policies is quite challenging.
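As a rough illustration of the classifier-based reward idea described above, the sketch below (assuming a PyTorch-style setup) trains a small CNN to separate user-provided goal images from images sampled out of the replay buffer and uses its logit as the reward signal. The network shape, optimiser handling, and batch construction are illustrative assumptions, not the exact VICE implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GoalClassifier(nn.Module):
    """Binary success/failure discriminator over images; its logit serves as reward."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 1),
        )

    def forward(self, imgs):
        return self.net(imgs).squeeze(-1)  # raw logits, one per image

def classifier_update(clf, optimizer, goal_imgs, replay_imgs):
    """One training step: goal images are positives, replay images negatives."""
    logits = clf(torch.cat([goal_imgs, replay_imgs]))
    labels = torch.cat([torch.ones(len(goal_imgs)), torch.zeros(len(replay_imgs))])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def reward_from_classifier(clf, imgs):
    """Learned reward: the classifier logit for the current observation."""
    with torch.no_grad():
        return clf(imgs)
```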
We provide insight into these challenges in Section 3 and, based on these insights, we propose a number of simple but important changes in Section 4 that enable R3L to learn effectively and autonomously in the real world without human intervention. (Figure 3: the object repositioning task. The goal is to move the object from any starting configuration to a particular goal position and orientation.) The system design outlined in Section 2, in principle, gives us a complete system to perform real world reinforcement learning without instrumentation. However, when ported to robotic problems in the real world, we find this basic design to be largely ineffective. To illustrate this, we present results for a simulated robotic manipulation task that requires repositioning a free-floating object with a three-fingered robotic hand, shown in Fig 3. We use this task for our investigative analysis, and show that the same insights extend to several other tasks (including real world tasks) in Section 6. The goal in this task is to reposition the object to a target pose from any initial pose in the arena. When the system is instantiated with vision-based SAC, rewards from goal images using VICE, and run without episodic resets, we see that the algorithm fails to make progress (Fig 4). Although it might appear that this setup fits within the assumptions of all of the components that are used, the complete system is ineffective. Which particular components of this problem make it so difficult? To investigate this issue, we set up experiments that combine the three main ingredients: varying observation type (visual vs. low-dimensional state), reward structure (VICE vs. hand-defined rewards that utilize ground-truth object state) and the ability to reset (episodic resets vs. reset-free, non-episodic learning). We start by considering the training time reward under each combination of factors, as shown in Fig 4. (Figure 4 caption: ...for the learned policy to achieve average training performance of less than 0.15 in pose distance (defined in App C.1.3) across 3 seeds. We compare training with true rewards vs. classifier-based rewards, with vs. without external resets, and from state vs. from vision on the object re-positioning task. We observe that learning without resets is more challenging than with resets and is much more challenging when combined with visual inputs.) First, we find that learning with episodic resets achieves good training time reward with both vision and state, while reset-free learning only obtains good training time reward with low-dimensional state; second, we find that the policy is able to pass the threshold for training time rewards in a non-episodic setting when operating from low-dimensional state, but not when using image observations. This suggests that combining the reset-free learning problem with visual observations makes it significantly more challenging than with low-dimensional state. However, the table in Fig 4 paints an incomplete picture. These numbers are related to the performance of the policies at training time, not how effective the learned policies are when being evaluated. When we consider the test-time performance (Fig 5) of the learned policies under reset-free conditions, we obtain a different set of results. While learning from low-dimensional state in the reset-free setting is able to achieve decent training time reward, the test-time performance of the corresponding learned policies is very poor.
This can likely be attributed to the fact that when the agent spends all its time stuck at the goal, it sees very little diversity of data in other parts of the state space, which significantly affects the efficacy of the actual policies being learned. This makes it very challenging to learn policies with completely reset-free schemes, which has prompted prior work to consider schemes such as learning reset controllers. As we discuss in the following section and in our experiments, these schemes are often insufficient for learning effective policies in the real world without any resets. These insights prompt us to propose simple solutions for instrumentation-free reinforcement learning in Section 4. To address the challenges identified in Section 3, we present two improvements to the basic system outlined above, which we found to be essential for uninstrumented real-world training: randomized perturbation controllers and unsupervised representation learning. Incorporating these components into the system from Section 2 results in a method that can learn in uninstrumented environments, as we will show in Section 6. From our observations in Fig 4, we can see that it is not effective to simply perform reset-free RL using a standard actor-critic algorithm. Even the policies trained with full state information do not actually learn how to perform the desired repositioning task, as shown in Fig 5. This is because, once the policy has performed the task once, it does not need to perform it again, and therefore it not only fails to learn how to perform the task reliably, but in fact tends to forget how to perform it at all. Prior work has considered addressing this problem by converting the reset-free learning problem into a more standard episodic problem, by learning a "reset controller," which is trained to reset the system to a particular initial state. However, as we will show in our experiments in Section 6, this still results in policies that only succeed from a very narrow range of initial states. Indeed, prior reset controller methods all reset to a single initial state. We take a different approach to learning in a reset-free setting. Rather than attributing the problem to the variance of the initial state distribution, we hypothesize that a major problem with reset-free learning is that the support of the distribution of states visited by the policy is extremely narrow, which makes the learning problem challenging and does not allow the agent to learn how to perform the desired task from any state it might find itself in. In this view, the goal should not be to reduce the variance of the initial state distribution, but instead to increase it. To this end, we utilize what we call random perturbation controllers: controllers that introduce perturbations intermittently into the system through a policy that is trained to explore the environment. The standard actor π(a|s) is executed for H time-steps, following which we execute the perturbation controller π_p(a|s) for H steps, and repeat. The policy π is trained with the VICE-based rewards for reaching the desired goals, while the perturbation controller π_p is trained only with an intrinsic motivation objective that encourages visiting under-explored states. In our implementation, we use the random network distillation (RND) objective for training the perturbation controller. This procedure is described in detail in Appendix A, and is evaluated on the tasks we consider in Fig 6.
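A rough sketch of the alternating execution scheme just described: the task policy runs for H steps and is rewarded by the learned classifier, then the perturbation policy runs for H steps rewarded only by a random-network-distillation style novelty bonus (the prediction error of a trained network against a fixed, randomly initialised target). The environment, policy, and buffer interfaces here are hypothetical stand-ins for whatever the actual system uses.

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Random network distillation: novelty = predictor error vs. a frozen random target."""
    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)  # the target stays fixed and random

    def forward(self, obs):
        # Per-state novelty bonus: squared prediction error, averaged over features.
        return ((self.predictor(obs) - self.target(obs)) ** 2).mean(dim=-1)

def alternate_collection(env, task_policy, perturb_policy, rnd, buffer, cycles, H=50):
    """Alternate H steps of the task policy and H steps of the perturbation policy."""
    obs = env.observe()
    for _ in range(cycles):
        for policy, is_perturb in ((task_policy, False), (perturb_policy, True)):
            for _ in range(H):
                action = policy.act(obs)
                next_obs, task_reward = env.step(action)  # no reset is ever requested
                bonus = float(rnd(torch.as_tensor(next_obs, dtype=torch.float32)))
                reward = bonus if is_perturb else task_reward
                buffer.add(obs, action, reward, next_obs, perturb=is_perturb)
                obs = next_obs
```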
The perturbation controller ensures that the support of the training distribution grows, and as a result the policies can learn the desired behavior much more effectively, as shown in Fig 7. The perturbation controller discussed above allows us to learn policies that can succeed at the task from a variety of starting states. However, learning from visual observations still presents a challenge. Our experiments in Fig 4 show that learning without resets from low-dimensional state is comparatively easier. We therefore aim to convert the vision-based learning problem into one that more closely resembles state-based learning, by training a variational autoencoder (VAE) and sharing the latent-variable representation across the actor and critic networks. Refer to Appendix B for more details. Note that we use a VAE as one instantiation of representation learning techniques that works well in the domains considered; approaches proposed in other recent work are also viable substitutes in this regard. While several prior works have also sought to incorporate unsupervised learning into reinforcement learning to make learning from images easier, we note that this becomes especially critical in the vision-based, reset-free setting, as motivated by the experiments in Section 3, which indicate that it is precisely this combination of factors, vision and no resets, that presents the most difficult learning problem. Therefore, although the particular solution we use in our system has been studied in prior work, it is brought to bear to address a challenge that arises in real-world learning that we believe has not been explored in prior studies. These two improvements, the perturbation controller and joint training with an unsupervised learning loss, combined with the general system described above, give us a complete practical system for real world reinforcement learning, which we term R3L. The overall method uses soft actor-critic for learning with visual observations and classifier-based rewards with VICE, introduces auxiliary reconstruction objectives for unsupervised representation learning, and uses a perturbation controller during training to ensure that the learned policy can accomplish the task from a wide variety of states. Further details on the full system can be found in Appendix A. The primary contribution of this work is to propose a paradigm for continual instrumentation-free real world robotic learning, and a practical instantiation of such a system. Several prior works have applied policies learned with RL to particular tasks in the real world. While many of these algorithms simply train in simulation and transfer the resulting policies to the real world, this paradigm is prone to domain shift and requires extensive simulation effort (Sadeghi & Levine). We instead focus on the paradigm of reinforcement learning purely in the real world. While algorithms have shown that we can indeed perform RL in the real world, these demonstrations have been limited to highly instrumented laboratory settings, with carefully hand-designed reward assignment schemes and reset mechanisms. In contrast, we are proposing to learn in environments with significantly less human instrumentation. This introduces a unique set of challenges not considered carefully in prior works. A key component of our system is learning from raw visual inputs. This has proven to be extremely difficult for policy gradient style algorithms due to challenging representation learning problems.
This has been made easier in simulated domains by using modified objectives such as auxiliary losses or by using more efficient algorithms (a). We show that reinforcement learning on raw visual input, while possible in standard RL settings, becomes significantly more challenging when considered in conjunction with nonepisodic, reset-free scenarios. Reward function design is crucial for any RL system, and is non-trivial to provide in the real world. Prior works have considered instrumenting the environment with additional sensors to evaluate rewards (; ;, which is a highly manual process, using demonstrations, which require manual effort to collect (; ;), or using interactive supervision from a user . In this work, we leverage the algorithm introduced by to assign rewards based on the likelihood of a goal classifier. While prior work also applied this method to robotic tasks , this was done in a setting where manual resets were provided, while we demonstrate that we can use learned rewards in a fully uninstrumented, reset-free setup. Learning without resets has been considered in prior works , although in different contexts -safe learning and learning compound controllers respectively. provide an algorithm to learn a reset controller with the goal of ensuring safe operation, but makes several assumptions that make it difficult to use in the real world: it assumes access to a ground truth reward function, it assumes access to an oracle function that can detect if an attempted reset by the reset policy was successful or not, and it assumes the ability to perform manual resets if the reset policy fails a certain number of times. In contrast, we propose an algorithm that allows for fully automated reinforcement learning in the real world. We compare to an ablation of our method that uses a reset controller similar to , and show that our method performs substantially better. Our perturbation controller also resembles the adversarial RL setup Pinto et al. (2017b);. However, while these prior methods explicitly aim to train policies that are robust to perturbations Pinto et al. (2017b) or explore effectively , we are concerned with learning without access to resets. While this line of work has connections to developmental robotics and it's subfields such as continual and lifelong learning, the goal of our work is to handle the practicalities of enabling reinforcement learning systems to learn in the real world without instrumentation or interruption, even for a single task setting, without multi-task considerations. The insights we make should also be applicable to improve developmental robotics algorithms. Though these settings don't touch on all aspects of developmental robotics, our work does relate with the subfields of continual learning , intrinsic motivation and sensory-motor development involving proprioceptive manipulation . In our experimental evaluation, we study how well the R3L system, described in Sections 2 and 4, can learn under realistic settings -visual observations, no hand-specified rewards, and no resets. We consider the following hypotheses: 1. Can we use R3L to learn complex robotic manipulation tasks without instrumentation? Does this system learn skills in both simulation and the real world? 2. Do the solutions proposed in Section 4 actually enable R3L to perform tasks without instrumentation that would not have been otherwise possible? 
We consider the task of dexterous manipulation with a three-fingered robotic hand, the DClaw, in a number of simulated and real world environments. These tasks involve complex coordination of three fingers with 3 DoF each in order to manipulate objects. Prior works that used this robot utilized explicit resets and low-dimensional true state observations, while we consider settings with visual observations, no hand-specified rewards, and no resets. (Figure 6, from left to right: valve rotation, bead manipulation and free object repositioning in simulation, as well as valve rotation and bead manipulation in the real world.) The tasks in our experiments are shown in Fig 6: manipulating beads on an abacus row, valve rotation, and free object repositioning. These tasks represent a wide class of problems that robots might encounter in the real world. For each task, we consider the problem of reaching the depicted goal configuration: moving the abacus beads to a particular position, rotating the valve to a particular angle, and repositioning the free object to a particular goal position. For each task, policies are evaluated from a wide selection of initial configurations. Additional details about the tasks, and the evaluation procedures, including initial configurations and success metrics, are provided in Appendix C. Videos and additional details can be found at https://sites.google.com/view/realworld-rl/. We compare our entire proposed system implementation (Section 4) with a number of baselines and ablations. Importantly, all methods must operate under the same assumptions: no system instrumentation for state estimation, reward specification, or episodic resets. Firstly, we compare the performance of R3L to a system which uses SAC for vision-based RL from raw pixels and VICE for providing rewards, and runs reset-free (denoted as "VICE"). This corresponds to the vanilla version of R3L (Section 2), with none of the proposed insights and changes. We then compare with prior reset-free RL algorithms that explicitly learn a reset controller to alternate goals in the state space ("Reset Controller + VAE"). Lastly, we compare algorithm performance with two clear ablations: running R3L without the perturbation controller ("VICE + VAE") and without the unsupervised learning ("R3L w/o VAE"). This highlights the significance of all the components of R3L. From the experimental results in Fig 7, it is clear that R3L is able to reach the best performance across tasks, while none of the other methods are able to solve all of the tasks. Different prior methods and ablations fail for different reasons: methods without the reconstruction objective struggle at parsing the high-dimensional input and are unable to solve the harder task; methods without the perturbation controller are ineffective at learning how to reach the goal from novel initialization positions for the more challenging object repositioning tasks, as discussed in Section 4. We note that an explicit reset controller, which can be viewed as a softer version of our perturbation controller with goal-directedness, learns to solve the easier tasks due to the reset controller encouraging exploration of the state space. In our experiments for free object repositioning, performance was reported across 3 choices of reset states. The high variance in evaluation performance indicates that the performance of such a controller (or a goal conditioned variant of it) is highly sensitive to the choice of reset states.
A poor choice of reset states, such as two that are very close together, may yield poor exploration leading to performance similar to the single goal VICE baseline. Furthermore, the choice of reset states is highly task dependent and it is often not clear what choice of goals will yield the best performance. On the contrary, our method does not require such taskspecific knowledge and uses random perturbations to reset while training: this allows for a robust, instrumentation-free controller while also ensuring fast convergence. Since the aim of R3L is to enable uninstrumented training in the real world, we next evaluate our method on a real-world robotic system, providing evidence that our insights generalize to the real world without any instrumentation. In the instrumentation-free setting, we leave the robot unattended with a set of goal images provided and the algorithm learns the desired behavior through interaction. The experiments are performed on the DClaw robotic hand with an RGB camera as the only sensory input. Intermediate policies are saved at regular intervals, and evaluations of all policies are performed after training has completed. For valve rotation, we declare an evaluation rollout a success if the final orientation is within within 15 • of the goal. For bead manipulation, we declare success if all the beads are within 2cm of the goal state. Fig 8 compares the performance of our method without supervised learning ("R3L w/o VAE") in the real-world against a baseline that uses SAC for vision-based RL from raw pixels, VICE for providing rewards and running reset-free (denoted as "VICE"). We see that our method learns policies that evaluate from nearly all the initial configurations, whereas VICE fails to evaluate from most initial configurations. Fig 9 depicts sample evaluation rollouts of the policies learned using our method. For further details about real world experiments see Appendix C.2. We presented the design and instantiation of R3L, a system for real world reinforcement learning. We identify and investigate the various ingredients required for such a system to scale gracefully with minimal human engineering and supervision. We show that this system must be able to learn from raw sensory observations, learn from very easily specified reward functions without reward engineering, and learn without any episodic resets. We describe the basic elements that are required to construct such a system, and identify unexpected learning challenges that arise from interplay of these elements. We propose simple and scalable fixes to these challenges through introducing unsupervised representation learning and a randomized perturbation controller. We show the effectiveness on such a system at learning without instrumentation in several simulated and real world environments. The ability to train robots directly in the real world with minimal instrumentation opens a number of exciting avenues for future research. Robots that can learn unattended, without resets or handdesigned reward functions, can in principle collect very large amounts of experience autonomously, which may enable very broad generalization in the future. Furthermore, fully autonomous learning should make it possible for robots to acquire large behavioral repertoires, since each additional task requires only the initial examples needed to learn the reward. 
However, there are also a number of additional challenges, including sample complexity, optimization and exploration difficulties on more complex tasks, safe operation, communication latency, sensing and actuation noise, and so forth, all of which would need to be addressed in future work in order to enable truly scalable real-world robotic learning. The training procedure can be summarized as follows: initialize the RND target and predictor networks f(s) and f̂(s); initialize the VICE reward classifier r_VICE(s); initialize the replay buffer D; collect initial exploration data and add it to D; then, for each of the N training iterations, sample an equal number of goal examples from G and negative examples from D, and for n_VICE steps train the VICE classifier on this batch with binary labels. We use a variant of VICE which defines the reward as the logits of the classifier, notably omitting the −log(π(a|s)) term. We also regularize our classifier with mixup. We train all of our experiments using 200 goal images, which takes under an hour to collect in the real world for each task. We found it important to normalize the predictor errors, just as prior work on RND did. We train a standard beta-VAE to maximize the evidence lower bound, given by $\mathcal{L} = \mathbb{E}_{q(z|x)}[\log p(x|z)] - \beta\, D_{KL}\big(q(z|x)\,\|\,p(z)\big)$. To collect training data, we sampled random states in the observation space. In the real world, this sampling can be replaced with training an exploratory policy (i.e. using the RND reward as the policy's only objective). The learned weights of the encoder of the VAE are frozen, and the latent input is used to train the policy for reset-free RL. C TASK DETAILS C.1 SIMULATED TASKS We evaluated our system across three tasks in simulation: bead manipulation, valve rotation, and free object repositioning. The bead manipulation task involves an abacus rod with four beads that can slide freely. The goal is to position two beads on each end from any initial configuration of beads. This can take the form of sliding one bead over (if three beads start on one side), two beads over (if all four beads start on one side), splitting beads apart (all four beads start in the middle), or some intermediate combination of those. The true reward is defined as the mean goal distance of all four beads. Policies are evaluated starting from the 8 initial configurations depicted in Fig 10. Evaluation performance reported in Section 6 for this task is defined as the final reward averaged across the 8 evaluation rollouts. The claw is positioned above a three-pronged valve (15 cm in diameter). The objective is to turn the valve to a given orientation from any initial orientation. The "true reward" is $r = -\log(|\theta_{\text{state}} - \theta_{\text{goal}}|)$. Policies are evaluated starting from the 8 initial configurations depicted in Fig 11. Evaluation performance reported in Section 6 for this task is defined as the final orientation distance averaged across the 8 evaluation rollouts. The claw is positioned atop a free (6 DoF) three-pronged object (15 cm in diameter), which can translate and rotate within a 30 cm x 30 cm box. For each setup we use an RGB camera to get images. We execute actions on the DClaw at 10 Hz. In order to operate at such a high frequency while also training from images, we sample and train asynchronously, but limit training to not exceed two gradient steps per transition sampled in the real world. Since direct performance metrics cannot be measured during training due to the lack of object instrumentation, evaluations of performance are done post-training. The task is identical to the one in simulation. Evaluations were done post-training.
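A minimal sketch of the beta-VAE objective stated above, assuming a diagonal-Gaussian posterior, a unit-Gaussian prior, and a pixel-wise squared-error reconstruction term; the particular beta value and reconstruction likelihood here are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon, target, mu, logvar, beta=4.0):
    """Negative ELBO of a beta-VAE with diagonal-Gaussian posterior and N(0, I) prior.

    recon/target: reconstructed and original images, same shape.
    mu/logvar:    posterior parameters from the encoder, shape (batch, latent_dim).
    """
    # Reconstruction term: pixel-wise MSE, summed over pixels, averaged over the batch.
    rec = F.mse_loss(recon, target, reduction="none").flatten(1).sum(-1).mean()
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)).mean()
    return rec + beta * kl

# Toy check with degenerate inputs: both terms are zero, so the printed loss is 0.0.
x = torch.rand(8, 3, 64, 64)
mu, logvar = torch.zeros(8, 32), torch.zeros(8, 32)
print(beta_vae_loss(x, x, mu, logvar).item())
```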
An evaluation trajectory was defined as a success if at the last step, the valve was within 15 degrees of the goal. Each policy was evaluated over 8 rollouts, with initial configurations evenly spaced out at increments of 45 degrees. Results are reported in Figures 13, 14. The rod is 22cm in length, and each bead measures 3.5cm in diameter. Evaluations were done post-training, using the following procedure: the environment was manually reset to each of the 8 specified configurations shown in Figures 15 and 16 (which cover a full range of the state space) at the start of each evaluation rollout. An evaluation trajectory was defined as a success if at the last time step, all beads were within 2cm of their goal positions. Performance was evaluated at around 20 hours, at which point the policy achieved greater than 80% success on the 10 evaluation rollouts (a random policy achieved a success rate of 10%). Results are reported in Figs 15, 16. Figure 13: These are the results of the evaluation rollouts on the valve rotation task in the real world using our method (without the VAE). Trained policies were saved at regular intervals and evaluated post-training. Each row is a different policy, and each column an evaluation rollout from a different initial configuration. The goal is highlighted in yellow. Our method is able to achieve high success rates after 5 hours of training. Figure 15: These are the results of the evaluation rollouts on the valve rotation task in the real world using our method (without the VAE). Trained policies were saved at regular intervals and evaluated post-training. Each row is a different policy, and each column an evaluation rollout from a different initial configuration. The goal is highlighted in yellow. Our method is able to achieve high success rates after 17 hours of training. Figure 16: These are the results of evaluation rollouts on the valve rotation task in the real world using the VICE single goal baseline. The policies fail to evaluate consistently, except when the initial configuration matches the goal configuration. | System to learn robotic tasks in the real world with reinforcement learning without instrumentation | 612 | scitldr
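For reference, the evaluation criteria described above can be written as small helper functions; the wrap-around handling of angles and the treatment of bead positions as scalar locations along the rod are assumptions made for illustration, not the paper's exact evaluation code.

```python
import numpy as np

def valve_success(final_angle_deg, goal_angle_deg, tol_deg=15.0):
    """Success if the final valve orientation is within tol_deg of the goal (wrap-around aware)."""
    diff = abs((final_angle_deg - goal_angle_deg + 180.0) % 360.0 - 180.0)
    return diff <= tol_deg

def bead_success(final_positions_cm, goal_positions_cm, tol_cm=2.0):
    """Success if every bead ends within tol_cm of its goal position along the rod."""
    final = np.asarray(final_positions_cm, dtype=float)
    goal = np.asarray(goal_positions_cm, dtype=float)
    return bool(np.all(np.abs(final - goal) <= tol_cm))

# Toy usage.
print(valve_success(350.0, 5.0))                     # True: 15 degrees apart across the wrap
print(bead_success([0.5, 1.0, 20.0, 21.0], [0, 0, 22, 22]))  # False: last beads off by >2 cm? no, 2.0 and 1.0 -> True
```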
We study the problem of cross-lingual voice conversion with non-parallel speech corpora in a one-shot learning setting. Most prior work requires either parallel speech corpora or a sufficient amount of training data from the target speaker. In contrast, we convert arbitrary sentences of an arbitrary source speaker to the target speaker's voice given only one training utterance from the target speaker. To achieve this, we formulate the problem as learning disentangled speaker-specific and context-specific representations, and follow the idea of Hsu et al., which uses the Factorized Hierarchical Variational Autoencoder (FHVAE). After training the FHVAE on multi-speaker training data, given arbitrary source and target speakers' utterances, we estimate the corresponding latent representations and then reconstruct the desired utterance with the voice converted to that of the target speaker. We use a multi-language speech corpus to learn a universal model that works for all of the languages. We investigate the use of a one-hot language embedding to condition the model on the language of the utterance being queried and show the effectiveness of the approach. We conduct voice conversion experiments with varying sizes of training utterances, and the model was able to achieve reasonable performance even with just one training utterance. We also investigate the effect of using or not using the language conditioning. Furthermore, we visualize the embeddings of the different languages and sexes. Finally, in the subjective tests, for one-language and cross-lingual voice conversion, our approach achieved moderately better or comparable results compared to the baseline in speech quality and similarity. Our model builds on the Factorized Hierarchical Variational Autoencoder proposed by Hsu et al. (BID0). Let a dataset D consist of N sequences X_i, where each sequence is composed of speech segments with a segment-level latent variable Z_1 capturing phonetic context and a sequence-level latent variable Z_2 capturing speaker characteristics. The joint probability of a sequence X_i factorizes over its segments and their latent variables. This is illustrated in the model figure. For inference, we use variational inference to approximate the true posterior. Since the sequence variational lower bound can be decomposed into segment variational lower bounds, we can use batches of segments instead of sequences for training, maximizing the segment-level bound. (Figure: speaker embedding visualizations are shown; in all subplots, the female and male embedding cluster locations are clearly separated.) Furthermore, the plot shows that the speaker embeddings of unique speakers fall near the same location. One phenomenon that we notice is that the speaker embeddings for different languages and genders fall to different locations for VAE-UNC; however, they fall closer to each other in VAE-CND. This might be due to the conditioning on language improving the representation ability of the model. Furthermore, we investigate the phonetic context embedding Z_1 for a sentence for four English test speakers. To subjectively evaluate voice conversion performance, we performed two perceptual tests. To evaluate the speaker similarity of the converted utterances, we conducted a same-different speaker similarity test. In this test, listeners heard two stimuli A and B with different content, and were then asked to indicate whether they thought that A and B were spoken by the same, or by two different speakers, using a five-point scale comprised of +2 (definitely same), +1 (probably same), 0 | We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying style embedding and decoding. We investigate using a multi-language speech corpus and investigate its effects. | 613 | scitldr
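The conversion procedure sketched in this description (encode the source segments into content latents, encode the single target utterance into a speaker latent, then decode the pair) could look roughly like the following, assuming pre-trained FHVAE-style encoder and decoder callables; the interfaces and the averaging of the speaker latent over segments are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def convert_voice(z1_encoder, z2_encoder, decoder, source_segments, target_segments):
    """One-shot voice conversion with a trained FHVAE-style model.

    z1_encoder: maps a speech segment to its content (phonetic-context) latent z1.
    z2_encoder: maps a speech segment to its speaker latent z2.
    decoder:    reconstructs a segment's features from a (z1, z2) pair.
    """
    # Speaker identity: average the speaker latents over the single target utterance.
    target_z2 = np.mean([z2_encoder(seg) for seg in target_segments], axis=0)
    # Content: keep each source segment's own content latent, swap in the target speaker latent.
    converted = [decoder(z1_encoder(seg), target_z2) for seg in source_segments]
    return converted
```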
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack. Feed-forward convolutional neural networks (CNNs) have demonstrated impressive results on a wide variety of visual tasks, such as image classification, captioning, segmentation, and object detection. However, the visual reasoning which they implement in solving these problems remains largely inscrutable, impeding understanding of their successes and failures alike. One approach to visualising and interpreting the inner workings of CNNs is the attention map: a scalar matrix representing the relative importance of layer activations at different 2D spatial locations with respect to the target task BID21. This notion of a nonuniform spatial distribution of relevant features being used to form a task-specific representation, and the explicit scalar representation of their relative relevance, is what we term 'attention'. Previous works have shown that for a classification CNN trained using image-level annotations alone, extracting the attention map provides a straightforward way of determining the location of the object of interest BID2 BID31 and/or its segmentation mask BID21, as well as helping to identify discriminative visual properties across classes BID31. More recently, it has also been shown that training smaller networks to mimic the attention maps of larger and higher-performing network architectures can lead to gains in classification accuracy of those smaller networks BID29. The works of BID21; BID2; BID31 represent one series of increasingly sophisticated techniques for estimating attention maps in classification CNNs. However, these approaches share a crucial limitation: all are implemented as post-hoc additions to fully trained networks. On the other hand, integrated attention mechanisms whose parameters are learned over the course of end-to-end training of the entire network have been proposed, and have shown benefits in various applications that can leverage attention as a cue. These include attribute prediction BID19, machine translation BID1, image captioning BID28 and visual question answering (VQA) BID24 BID26.
Similarly to these approaches, we here represent attention as a probabilistic map over the input image locations, and implement its estimation via an end-to-end framework. The novelty of our contribution lies in repurposing the global image representation as a query to estimate multi-scale attention in classification, a task which, unlike e.g. image captioning or VQA, does not naturally involve a query. Fig. 1 provides an overview of the proposed method. Henceforth, we will use the terms 'local features' and 'global features' to refer to features extracted by some layer of the CNN whose effective receptive fields are, respectively, contiguous proper subsets of the image ('local') and the entire image ('global'). By defining a compatibility measure between local and global features, we redesign standard architectures such that they must classify the input image using only a weighted combination of local features, with the weights represented here by the attention map. The network is thus forced to learn a pattern of attention relevant to solving the task at hand. We experiment with applying the proposed attention mechanism to the popular CNN architectures of VGGNet BID20 and ResNet BID11, and capturing coarse-to-fine attention maps at multiple levels. We observe that the proposed mechanism can bootstrap baseline CNN architectures for the task of image classification: for example, adding attention to the VGG model offers an accuracy gain of 7% on CIFAR-100. Our use of attention-weighted representations leads to improved fine-grained recognition and superior generalisation on 6 benchmark datasets for domain-shifted classification. As observed on models trained for fine-grained bird recognition, attention-aware models offer limited resistance to adversarial fooling at low and moderate L∞-noise norms. The trained attention maps outperform other CNN-derived attention maps BID31, traditional saliency maps BID14 BID30, and top object proposals on the task of weakly supervised segmentation of the Object Discovery dataset. In §5, we present sample results which suggest that these improvements may owe to the method's tendency to highlight the object of interest while suppressing clutter. Attention in CNNs is implemented using one of two main schemes: post hoc network analysis or trainable attention mechanisms. The former scheme has been predominantly employed to access network reasoning for the task of visual object recognition BID21 BID31 BID2. BID21 approximate CNNs as linear functions, interpreting the gradient of a class output score with respect to the input image as that class's spatial support in the image domain, i.e. its attention map. Importantly, they are one of the first to successfully demonstrate the use of attention for localising objects of interest using image-level category labels alone. BID31 apply the classifier weights learned for image-level descriptors to patch descriptors, and the resulting class scores are used as a proxy for attention. Their improved localisation performance comes at the cost of classification accuracy. BID2 introduce attention in the form of binary nodes between the network layers of BID21. At test time, these attention maps are adapted to a fully trained network in an additional parameter tuning step. Notably, all of the above methods extract attention from fully trained CNN classification models, i.e. via post-processing.
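As a point of contrast with the trainable mechanism proposed in this work, the post-hoc scheme attributed to BID31 above can be sketched as follows: the final linear classifier's weights for a chosen class are applied to the patch descriptor at every spatial position, and the resulting class-score map is read as attention. Array shapes and the normalisation are illustrative assumptions.

```python
import numpy as np

def posthoc_attention_map(feature_map, classifier_weights, class_idx):
    """Class-score map from applying image-level classifier weights to patch descriptors.

    feature_map:        (C, H, W) activations of the last convolutional layer.
    classifier_weights: (num_classes, C) weights of the final linear classifier.
    """
    c, h, w = feature_map.shape
    patches = feature_map.reshape(c, h * w)            # one C-dim descriptor per location
    scores = classifier_weights[class_idx] @ patches   # per-location class score
    scores = scores.reshape(h, w)
    scores -= scores.min()                             # shift/scale to [0, 1] for visualisation
    return scores / (scores.max() + 1e-8)

# Toy usage.
fmap = np.random.rand(512, 14, 14)
weights = np.random.rand(10, 512)
print(posthoc_attention_map(fmap, weights, class_idx=3).shape)  # (14, 14)
```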
Subsequently, as discussed shortly, many methods have explored the per-formance advantages of optimising the weights of the attention unit in tandem with the original network weights. Trainable attention in CNNs falls under two main categories -hard (stochastic) and soft (deterministic). In the former, a hard decision is made regarding the use of an image region, often represented by a low-order parameterisation, for inference (; . The implementation is non-differentiable and relies on a sampling-based technique called REINFORCE for training, which makes optimising these models more difficult. On the other hand, the soft-attention method is probabilistic and thus amenable to training by backpropagation. The method of BID13 lies at the intersection of the above two categories. It uses a parameterised transform to estimate hard attention on the input image deterministically, where the parameters of the image transformation are estimated using differentiable functions. The soft-attention method of BID19 demonstrates improvements over the above by implementing nonuniform non-rigid attention maps which are better suited to natural object shapes seen in real images. It is this direction that we explore in our current work. Trainable soft attention in CNNs has mainly been deployed for query-based tasks BID1 BID19 BID28 ; BID24 BID26 . As is done with the exemplar captions of , the questions of BID24 ; BID26, and the source sentences of BID1, we here map an image to a high-dimensional representation which in turn highlights the relevant parts of the input image to guide the desired inference. We draw a close comparison to the progressive attention approach of Seo et al. FORMULA2 for attribute prediction. However, there are some noteworthy differences. Their method uses a one-hot encoding of category labels to query the image: this is unavailable to us and we hence substitute a learned representation of the global image. In addition, their sequential mechanism refines a single attention map along the length of the network pipeline. This doesn't allow for the expression of a complementary focus on different parts of the image at different scales as leveraged by our method, illustrated for the task of fine-grained recognition in §5.The applications of attention, in addition to facilitating the training task, are varied. The current work covers the following areas: · Domain shift: A traditional approach to handling domain shift in CNNs involves fine-tuning on the new dataset, which may require thousands of images from each target category for successful adaptation. We position ourselves amongst approaches that use attention BID31 BID13 to better handle domain changes, particularly those involving content, occlusion, and object pose variation, by selectively focusing to the objects of interest. · Weakly supervised semantic segmentation: This area investigates image segmentation using minimal annotations in the form of scribbles, bounding boxes, or image-level category labels. Our work uses category labels and is related to the soft-attention approach of BID12 . However, unlike the aformentioned, we do not explicitly train our model for the task of segmentation using any kind of pixel-level annotations. We evaluate the binarised spatial attention maps, learned as a by-product of training for image classification, for their ability to segment objects. 
· Adversarial robustness: The work by BID9 explores the ease of fooling deep classification networks by adding an imperceptible perturbation to the input image, implemented as an epsilon step in the direction opposite to the predicted class score gradient. Subsequent works, including BID8, argue that this vulnerability comes from relying on spurious or non-oracle features for classification. Consequently, BID8 demonstrate increased adversarial robustness by identifying and masking the likely adversarial feature dimensions. We experiment with performing such suppression in the spatial domain. The core goal of this work is to use attention maps to identify and exploit the effective spatial support of the visual information used by CNNs in making their classification decisions. This approach is premised on the hypothesis that there is benefit to identifying salient image regions and amplifying their influence, while likewise suppressing the irrelevant and potentially confusing information in other regions. In particular, we expect that enforcing a more focused and parsimonious use of image information should aid in generalisation over changes in the data distribution, as occurs for instance when training on one set and testing on another. Thus, we propose a trainable attention estimator and illustrate how to integrate it into standard CNN pipelines so as to influence their output as outlined above. (Figure 2: Attention introduced at 3 distinct layers of VGG. Lowest-level attention maps appear to focus on the surroundings (i.e., the rocky mountain), intermediate-level maps on object parts (i.e., harness and climbing equipment) and the highest-level maps on the central object.) The method is based on enforcing a notion of compatibility between local feature vectors extracted at intermediate stages in the CNN pipeline and the global feature vector normally fed to the linear classification layers at the end of the pipeline. We implement attention-aware classification by restricting the classifier to use only a collection of local feature vectors, as chosen and weighted by the compatibility scores, in classifying examples. We will first discuss the modification to the network architecture and the method of training it, given a choice of compatibility function. We will then conclude the method description by presenting alternate choices of the compatibility function. The proposed approach is illustrated in Fig. 2. Denote by $\mathcal{L}^s = \{\ell^s_1, \ell^s_2, \ldots, \ell^s_n\}$ the set of feature vectors extracted at a given convolutional layer $s \in \{1, \cdots, S\}$. Here, each $\ell^s_i$ is the vector of output activations at the spatial location i of n total spatial locations in the layer. The global feature vector g has the entire input image as support and is output by the network's series of convolutional and nonlinear layers, having only to pass through the final fully connected layers to produce the original architecture's class score for that input.
Assume for now the existence of a compatibility function C which takes two vectors of equal dimension as arguments and outputs a scalar compatibility score: this will be specified in Section 3.2. The method proceeds by computing, for each of one or more layers s, the set of compatibility scores $C(\mathcal{L}^s, g) = \{c^s_1, c^s_2, \ldots, c^s_n\}$. The normalised compatibility scores $A^s = \{a^s_1, a^s_2, \ldots, a^s_n\}$ are obtained by passing the raw scores through a softmax, $a^s_i = \exp(c^s_i) / \sum_{j=1}^{n} \exp(c^s_j)$, and the attention-weighted representation for layer s is the convex combination $g^s_a = \sum_{i=1}^{n} a^s_i\, \ell^s_i$. In the case of a single layer (S = 1), the attention-incorporating global vector $g_a$ is computed as described above, then mapped onto a T-dimensional vector which is passed through a softmax layer to obtain class prediction probabilities $\{p_1, p_2, \cdots, p_T\}$, where T is the number of target classes. In the case of multiple layers (S > 1), we compare two options: concatenating the global vectors into a single vector $g_a = [g^1_a, g^2_a, \ldots, g^S_a]$ and using this as the input to the linear classification step as above, or using S different linear classifiers and averaging the output class probabilities. All free network parameters are learned in end-to-end training under a cross-entropy loss function. The compatibility score function C can be defined in various ways. The alignment model from BID1 can be re-purposed as a compatibility function as follows: $c^s_i = \langle u, \ell^s_i + g \rangle,\ i \in \{1, \ldots, n\}$. Given the existing free parameters between the local and the global image descriptors in a CNN pipeline, we can simplify the concatenation of the two descriptors to an addition operation, without loss of generality. This allows us to limit the parameters of the attention unit. We then learn a single fully connected mapping from the resultant descriptor to the compatibility scores. Here, the weight vector u can be interpreted as learning the universal set of features relevant to the object categories in the dataset. In that sense, the weights may be seen as learning the general concept of objectness. Alternatively, we can use the dot product between g and $\ell^s_i$ as a measure of their compatibility: $c^s_i = \langle \ell^s_i, g \rangle,\ i \in \{1, \ldots, n\}$. In this case, the relative magnitude of the scores would depend on the alignment between g and $\ell^s_i$ in the high-dimensional feature space and the strength of activation of $\ell^s_i$. In a standard CNN architecture, a global image descriptor g is derived from the input image and passed through a fully connected layer to obtain class prediction probabilities. The network must express g via mapping of the input into a high-dimensional space in which salient higher-order visual concepts are represented by different dimensions, so as to render the classes linearly separable from one another. Our method encourages the filters earlier in the CNN pipeline to learn similar mappings, compatible with the one that produces g in the original architecture. This is achieved by allowing a local descriptor $\ell_i$ of an image patch to contribute to the final classification step only in proportion to its compatibility with g as detailed above. That is, $C(\hat{\ell}_i, g)$ should be high if and only if the corresponding patch contains parts of the dominant image category. Note that this implies that the effective filters operating over image patches in the layers s must represent relatively 'mature' features with respect to the classification goal. We thus expect to see the greatest benefit in deploying attention relatively late in the pipeline. Further, different kinds of class details are more easily accessible at different scales. Thus, in order to facilitate the learning of diverse and complementary attention-weighted features, we propose the use of attention over different spatial resolutions.
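A compact sketch of the attention step described above in a PyTorch-like style: compatibility scores between each local feature vector and the (already projected) global descriptor are computed either with the parametrised form c_i = <u, l_i + g> or the dot-product form c_i = <l_i, g>, normalised with a softmax over spatial locations, and used to form the weighted combination g_a = sum_i a_i l_i. The channel counts and the 1x1-convolution realisation of u are implementation assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttentionBlock(nn.Module):
    """Attention over one local feature map, queried by the global descriptor g."""
    def __init__(self, channels, mode="pc"):
        super().__init__()
        self.mode = mode
        # Learned weight vector u of the parametrised compatibility, realised as a 1x1 conv.
        self.u = nn.Conv2d(channels, 1, kernel_size=1, bias=False)

    def forward(self, local_map, g):
        # local_map: (B, C, H, W) local features; g: (B, C) global descriptor.
        g_map = g.unsqueeze(-1).unsqueeze(-1)                    # broadcast g over the spatial grid
        if self.mode == "pc":
            scores = self.u(local_map + g_map)                   # c_i = <u, l_i + g>
        else:                                                    # "dp": u is unused in this mode
            scores = (local_map * g_map).sum(1, keepdim=True)    # c_i = <l_i, g>
        b, _, h, w = scores.shape
        attn = F.softmax(scores.view(b, -1), dim=1).view(b, 1, h, w)  # a_i, sums to 1 over locations
        g_a = (attn * local_map).flatten(2).sum(-1)              # g_a = sum_i a_i * l_i, shape (B, C)
        return g_a, attn
```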
The combination of the two factors stated above in our deploying the attention units after the convolutional blocks that are late in the pipeline, but before their corresponding maxpooling operations, i.e. before a drop in the spatial resolution. Note also that the use of the softmax function in normalising the compatibility scores enforces 0 ≤ a i ≤ 1 ∀i ∈ {1 · · · n} and i a i = 1, i.e. that the combination of feature vectors is convex. This ensures that features at different spatial locations must effectively compete against one another for their share of the attention map. The compatibility scores thus serve as a robust proxy for attention in the classification pipeline. To incorporate attention into the VGG network, we move each of the first 2 max-pooling layers of the baseline architecture after each of the 2 corresponding additional convolutional layers that we introduce at the end of the pipeline. By pushing the pooling operations further down the pipeline, we ensure that the local layers used for estimating attention have a higher resolution. Our modified model has 17 layers: 15 convolutional and 2 fully connected. The output activations of layer-16 (fc) define our global feature vector g. We use the local feature maps from layers 7, 10, and 13 (convolutional) for estimating attention. We compare our approach with the activation-based attention method of BID31, and the progressive attention mechanism of BID19. For RNs BID11, we use a 164-layered network. We replace the spatial average-pooling step after the computational block-4 with extra convolutional and max-pooling steps to obtain the global feature g. The outputs of blocks 2, 3, and 4 serve as the local descriptors for attention. For more details about network architectures refer to §A.2. Note that, for both of the above architectures, if the dimensionality of g and the local features of a layer s differ, we project g to the lower-dimensional space of the local features, instead of the reverse. This is done in order to limit the parameters at the classification stage. The global vector g, once mapped to a given dimensionality, is then shared by the local features from different layers s as long as they are of that dimensionality. We refer to the network Net with attention at the last level as Net-att, at the last two levels as Net-att2, and at the last three levels as Net-att3. We denote by dp the use of the dot product for matching the global and local descriptors and by pc the use of parametrised compatibility. We denote by concat the concatenation of descriptors from different levels for the final linear classification step. We use indep to denote the alternative of independent prediction of probabilities at different levels using separate linear classifiers: these probabilities are averaged to obtain a single score per class. We evaluate the benefits of incorporating attention into CNNs for the primary tasks of image classification and fine-grained object category recognition. We also examine robustness to the kind of adversarial attack discussed by BID9. Finally, we study the quality of attention maps as segmentations of image objects belonging to the network-predicted categories. For details of the datasets used and their pre-processing routines refer to §A.1, for network training schedules refer to §A.3, and for task-specific experimental setups refer to §A.4. Layer-13 att. Layer-10 att. Layer-13 att. 
Figure 3: Attention maps from VGG-att2 trained on lowres CIFAR-10 dataset focus sharply on the objects in high-res ImageNet images of CIFAR categories; contrasted here with the activation-based attention maps of BID29. Layer-10 att. Layer-13 att. Figure 4: VGG-att2 trained on CUB-200 for fine-grained bird recognition task: layer-10 learns to fixate on eye and beak regions, layer-13 on plumage and feet. Within the standard VGG architecture, the proposed attention mechanism provides noticeable performance improvement over baseline models (e.g. VGG, RN) and existing attention mechanisms (e.g. GAP, PAN) for visual recognition tasks, as seen in TAB2 & 2. Specifically, the VGG-att2-concat-pc model achieves a 2.5% and 7.4% improvement over baseline VGG for CIFAR-10 and CIFAR-100 classification, and 7.8% and 0.5% improvement for fine-grained recognition of CUB and SVHN categories. As is evident from Fig. 3, the attention mechanism enables the network to focus on the object of interest while suppressing the regions. For the task of fine-grained recognition, different layers learn specialised focus on different object parts as seen in Fig. 4. Note that the RN-34 model for CUB from TAB3 is pre-trained on ImageNet. In comparison, our networks are pre-trained using the much smaller and less diverse CIFAR-100. In spite of the low training cost, our networks are on par with the former in terms of accuracy. Importantly, despite the increase in the total network parameters due to the attention units, the proposed networks generalise exceedingly well to the test set. We are unable to compare directly with the CUB of BID13 due to a difference in dataset pre-processing. However, we improve over PAN BID19 ) by 4.5%, which has itself been shown to outperform the former at a similar task. For the remaining experiments, concat-pc is our implicit attention design unless specified otherwise. When the same attention mechanism is introduced into RNs we observe a marginal drop in performance: 0.9% on CIFAR-10 and 1.5% on CIFAR-100. It is possible that the skip-connections in RNs work in a manner similar to the proposed attention mechanism, i.e. by allowing select local features from earlier layers to skip through and influence inference. While this might make the performance improvement due to attention redundant, our method, unlike the skip-connections, is able to provide explicit attention maps that can be used for auxiliary tasks such as weakly supervised segmentation. Finally, the global feature vector is used as a query in our attention calculations. By changing the query vector, one could expect to affect the predicted attention pattern. A brief investigation of the extent to which the two compatibility functions allow for such post-hoc control is provided in §A.5. Figure 6: Network fooling rate measured as a percentage change in the predicted class labels w.r.t those predicted for the unperturbed images. From Fig. 6, the fooling rate of attention-aware VGG is 5% less than the baseline VGG at an L ∞ noise norm of 1. As the noise norm increases, the fooling rate saturates for the two networks and the performance gap gradually decreases. Interestingly, when the noise begins to be perceptible (see Fig. 5, col. 5), the fooling rate of VGG-att2 is around a percentage higher than that of VGG. Table 3: Cross-domain classification: Top-1 accuracies using models trained on CIFAR-10/100. CIFAR images cover a wider variety of natural object categories compared to those of SVHN and CUB. 
Hence, we use these to train different network architectures and use the networks as offthe-shelf feature extractors to evaluate their generalisability to new unseen datasets. From Table 3, attention-aware models consistently improve over the baseline models, with an average margin of 6%. We make two additional observations. Firstly, low-resolution CIFAR images contain useful visual properties that are transferrable to high-resolution images such as the 600 × 600 images of the Event-8 dataset . Secondly, training for diversity is better. CIFAR-10 and CIFAR-100 datasets contain the same corpus of images, only organised differently into 10 and 100 categories respectively. From the in Table 3, and the attention maps of Fig. 7, it appears that while learning to distinguish a larger set of categories the network is able to highlight more Layer-7 VGG-att3 (CIFAR10)Layer-10 VGG-att3 (CIFAR10)Layer-13 VGG-att3 (CIFAR10)Layer-7 VGG-att3 (CIFAR100)Layer-10 VGG-att3 (CIFAR100)Layer-13 VGG-att3 (CIFAR100)Level-3 RN-att2 (CIFAR100)Level-4 RN-att2 (CIFAR100)Figure 7: Row 1: Event-8 (croquet), Row 2: Scene-67 (bar). Attention maps for models trained on CIFAR-100 (c100) are more diverse than those from the models trained on CIFAR-10 (c10). Note the sharp attention maps in col. 7 versus the uniform ones in col. 4. Attention maps at lower levels appear to attend to part details (e.g. the stack of wine bottles in the bar (row 2)) and at a higher level on whole objects owing to a large effective receptive field. BID30 51.84 46.61 39.52 -Top object proposal-based -MCG BID0 32.02 54.21 37.85 -Joint segmentation-basedJoulin et al. 15.36 37.15 30.16 Object-discovery 55.81 64.42 51.65 Chen et al. BID3 54.62 69.20 44.46 Jain et al. BID6 58.65 66.47 53.57Figure 9: Jaccard scores (higher is better) for binarised attention maps from CIFAR-10/100 trained models tested on the Object Discovery dataset. From Table 9, the proposed attention maps perform significantly better at weakly supervised segmentation than those obtained using the existing attention methods BID31 BID19 and compare favourably to the top object proposal method, outperforming for all three categories by a minimum margin of 11% and 3% respectively. We do not compare with the CNN-based object proposal methods as they are trained using additional bounding box annotations. We surpass the saliency-based methods in the car category, but perform less well for the other two categories of airplane and horse. This could be due to the detailed structure and smaller size of objects of the latter two categories, see FIG3. Finally, we perform single-image inference and yet compare well to the joint inference methods using a group of test images for segmenting the common object category. We propose a trainable attention module for generating probabilistic landscapes that highlight where and in what proportion a network attends to different regions of the input image for the task of classification. We demonstrate that the method, when deployed at multiple levels within a network, affords significant performance gains in classification of seen and unseen categories by focusing on the object of interest. We also show that the attention landscapes can facilitate weakly supervised segmentation of the predominant object. Further, the proposed attention scheme is amenable to popular post-processing techniques such as conditional random fields for refining the segmentation masks, and has shown promise in learning robustness to certain kinds of adversarial attacks. 
We evaluate the proposed attention models on CIFAR-10 , CIFAR-100 (, SVHN) and CUB-200-2011 for the task of classification. We use the attention-incorporating VGG model trained on CUB-200-2011 for investigating robustness to adversarial attacks. For cross-domain classification, we test on 6 standard benchmarks including STL BID31 ), Caltech-256 (and Action-40 . We use the Object Discovery dataset for evaluating weakly supervised segmentation performance. A detailed summary of these datasets can be found in Table 4 .For all datasets except SVHN and CUB, we perform mean and standard deviation normalisation as well as colour normalisation. For CUB-200-2011, the images are cropped using ground-truth bounding box annotations and resized. For cross-domain image classification we downsample the input images to avoid the memory overhead. Dataset Size (total/train/test/extra) Number of classes / Type Resolution Tasks Table 4: Summary of datasets used for experiments across different tasks (C: classification, C-c: classification cross-domain, S: segmentation).The natural images span from those of objects in plain to cluttered indoor and outdoor scenes. The objects vary from simple digits to humans involved in complex activities. Progressive attention networks: We experiment with the progressive attention mechanism proposed by BID19 as part of our 2-level attention-based VGG models. The attention at the lower level (layer-10) is implemented using a 2D map of compatibility scores, obtained using the parameterised compatibility function discussed in §3.2. Note that at this level, the compatibility scores are not jointly normalised using the softmax operation, but are normalised independently using the pointwise sigmoid function. These scores, at each spatial location, are used to weigh the corresponding local features before the feature vectors are fed to the next network layer. This is the filtering operation at the core of the progressive attention scheme proposed by BID19. For the final level, attention is implemented in the same way as in VGG-att2-concat-pc. The compatibility scores are normalised using a softmax operation and the local features, added in proportion to the normalised attention scores, are trained for image classification. We implement and evaluate the above-discussed progressive attention approach as well as the proposed attention mechanism with the VGG architecture using the codebase provided here: https://github.com/szagoruyko/cifar.torch. The code for CIFAR dataset normalisation is included in the repository. For the ResNet architecture, we make the attention-related modifications to the network specification provided here: https://github.com/szagoruyko/wide-residualnetworks/tree/fp16/models. The baseline ResNet implementation consists of 4 distinct levels that project the RGB input onto a 256-dimensional space through 16-, 64-, and 128-dimensional embedding spaces respectively. Each level excepting the first, which contains 2 convolutional layers separated by a non-linearity, contains n-residual blocks. Each residual block in turn contains a maximum of 3 convolutional layers interleaved with non-linearities. This yields a network definition of 9n + 2 parameterised layers BID11. We work with an n of 18 for a 164-layered network. Batch normalization is incorporated in a manner similar to other contemporary networks. We replace the spatial average pooling layer after the final and 4 th level by convolutional and maxpooling operations which gives us our global feature vector g. 
We refer to the network implementing attention at the output of the last level as RN-att and with attention at the output of last two levels as RN-att2. Following the obtained on the VGG network, we train the attention units in the concat-pc framework. A.3 TRAINING ROUTINES VGG networks for CIFAR-10, CIFAR-100 and SVHN are trained from scratch. We use a stochastic gradient descent (SGD) optimiser with a batch size of 128, learning rate decay of 10 −7, weight decay of 5 × 10 −4, and momentum of 0.9. The initial learning rate for CIFAR experiments is 1 and for SVHN is 0.1. The learning rate is scaled by 0.5 every 25 epochs and we train over 300 epochs for convergence. For CUB, since the training data is limited, we initialise the model with the weights learned for CIFAR-100. We use the transfer-learning training schedule inspired by BID17. Thus the training starts at a learning rate of 0.1 for first 30 epochs, is multiplied by 2 twice over the next 60 epochs, and then scaled by 0.5 every 30 epochs for the next 200 epochs. For ResNet, the networks are trained using an SGD optimizer with a batch size of 64, initial learning rate of 0.1, weight decay of 5 × 10 −4, and a momentum of 0.9. The learning rate is multiplied by 0.2 after 60, 120 and 160 epochs. The network is trained for 200 epochs until convergence. We train the models for CIFAR-10 and CIFAR-100 from scratch. All models are implemented in Torch and trained with an NVIDIA Titan-X GPU. Training takes around one to two days depending on the model and datasets. We generate the adversarial images using the fast gradient sign method of BID9 and observe the network fooling behaviour at increasing L ∞ norms of the perturbations. For cross-domain classification, we extract features at the input of the final fully connected layer of each model, use these to train a linear SVM with C = 1 and report the of a 5-fold cross validation, similar to the setup used by BID31. At no point do we finetune the networks on the target datasets. We perform ZCA whitening on all the evaluation datasets using the pre-processing Python scripts specified in the following for whitening CIFAR datasets: https://github.com/szagoruyko/wide-residual-networks/tree/fp16.For weakly supervised segmentation, the evaluation datasets are preprocessed for colour normalisation using the same scheme as adopted for normalising the training datasets of the respective models. For the proposed attention mechanism, we combine the attention maps from the last 2 levels using element-wise multiplication, take the square root of the to re-interpret it as a probability distribution (in the limit of the two probabilistic attention distributions approaching each other), rescale the values in the ant map to a range of and binarise the map using the Otsu binarization threshold. For the progressive attention mechanism of Seo et al. FORMULA2, we simply multiply the attention maps from the two different levels without taking their square root, given that these attention maps are not complementary but sequential maps used to fully develop the final attention distribution. The rest of the operations of magnitude rescaling and binarisation are commonly applied to all the final attention maps, including those obtained using GAP BID31. In our framework, the global feature vector is used as a query for estimating attention on the local image regions. 
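Before the query-vector analysis that follows, the attention-map post-processing described above for weakly supervised segmentation can be sketched as below; the map resolutions and the bilinear upsampling step are assumptions for illustration.

```python
# Sketch of the attention-map fusion for weakly supervised segmentation: combine
# the maps from the last two attention levels, take the square root, rescale to
# [0, 1] and binarise with Otsu's threshold. Upsampling choices are assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.transform import resize

def attention_to_mask(a_level1, a_level2, out_shape):
    # a_level1, a_level2: 2D attention maps (already softmax-normalised)
    a1 = resize(a_level1, out_shape, order=1)
    a2 = resize(a_level2, out_shape, order=1)
    fused = np.sqrt(a1 * a2)                                            # element-wise product + sqrt
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)  # rescale to [0, 1]
    return fused > threshold_otsu(fused)                                # binary object mask

mask = attention_to_mask(np.random.rand(14, 14), np.random.rand(7, 7), (224, 224))
```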
Thus, by changing the global feature vector, one could expect to affect the attention distribution estimated over the local regions in a predictable manner. The extent to which the two different compatibility functions, the parameterised and the dot-product-based compatibility functions, allow for such external control over the estimated attention patterns varies. For the purpose of the analysis, we consider two different network architectures. The first is the (VGG-att2)-concat-dp (DP) model from TAB2 which uses a dot-product-based compatibility function. The other is the (VGG-att3)-concat-pc (PC) model which makes use of the parameterised compatibility function. In terms of the dataset, we make use of the extra cosegmentation image set available with the Object Discovery dataset package. We select a single image centrally focused on an instance of a given object category and call this the query image. We then gather a few distinct images that contain objects of the same category but in a more cluttered environment with pose and intra-class variations. We call these the target images. In order to visualise the role of the global feature vector in driving the estimated attention patterns, we perform two rounds of experiments. In the first round, we obtain both the global and local image feature vectors from a given target image, shown in column 2 of every row of FIG0. The processing follows the standard protocol and the resulting attention patterns at layer 10 for the two architectures can be seen in columns 3 and 6 of the same figure. In the second round, we obtain the local feature vectors from the target image but the global feature vector is obtained by processing the query image specific to the category being considered, shown in column 1. The new attention patterns are displayed in columns 4 and 7 respectively. The changes in the attention values at different spatial locations as a proportion of the original attention pattern values are shown in columns 5 and 8 respectively. Notably, for the dot-product-based attention mechanism, the global vector plays a prominent role in guiding attention. This is visible in the increase in the attention magnitudes at the spatial locations near or related to the query image object. On the other hand, for the attention mechanism that makes use of the parameterised compatibility function, the global feature vector seems to be redundant. Any change in the global feature vector does not transfer to the resulting attention map. In fact, numerical observations show that the magnitudes of the global features are often a couple of orders of magnitude smaller than those of the corresponding local features. Thus, a change in the global feature vector has little to no impact on the predicted attention scores. Yet, the attention maps themselves are able to consistently highlight object-relevant image regions. Thus, it appears that in the case of parameterised compatibility based attention, the object-centric high-order features are learned as part of the weight vector u. These features are adapted to the training dataset and are able to generalise to new images inasmuch as the object categories at test time are similar to those seen during training.
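A sketch of this query-swap probe for the dot-product compatibility is given below; the feature tensors are random stand-ins for the actual CNN features and are purely illustrative.

```python
# Sketch of the query-swap probe: the attention map over a target image's local
# features is recomputed with the global vector taken from a separate query image.
# Feature tensors are random stand-ins for CNN features (illustrative only).
import torch
import torch.nn.functional as F

def dp_attention(local_feats, g):
    # local_feats: (n, C) target-image local vectors; g: (C,) global vector
    c = local_feats @ g           # c_i = <l_i, g>
    return F.softmax(c, dim=0)    # normalised attention over spatial locations

local_target = torch.randn(14 * 14, 512)
g_target = torch.randn(512)       # round 1: global vector from the target image
g_query = torch.randn(512)        # round 2: global vector from the query image

a_own = dp_attention(local_target, g_target)
a_query = dp_attention(local_target, g_query)
relative_change = (a_query - a_own) / (a_own + 1e-8)   # proportional change per location
```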
Recurrent neural networks (RNNs) are effective in solving very complex supervised and unsupervised tasks. There has been significant progress in the RNN field across natural language processing, speech processing, computer vision and multiple other domains. This paper deals with the application of RNNs to different use cases: Incident Detection, Fraud Detection, and Android Malware Classification. The best performing neural network architecture is chosen by conducting a chain of experiments over different network parameters and structures. The network is run for up to 1000 epochs with the learning rate set in the range of 0.01 to 0.5. The RNN performed very well when compared to classical machine learning algorithms. This is mainly possible because RNNs implicitly extract the underlying features and identify the characteristics of the data, which leads to better accuracy. In today's data-driven world, malware is a common threat to everyone, from large organizations to ordinary people, and we need to safeguard our systems, computer networks, and valuable data. Cyber-crime has risen sharply, with numerous hacks, data thefts, and other cyber-attacks. Hackers gain access through any available loophole and steal valuable data, passwords and other useful information. On the Android platform in particular, malicious attacks have increased with the growth in the number of applications. At the same time, it is easy to develop malicious malware and feed it into the Android market using third-party software. Attacks can come through any means, such as e-mails, executable files, or software. Criminals make use of security vulnerabilities to exploit their victims. This underlines the need for an effective system to handle fraudulent activities. However, today's sophisticated attacking algorithms avoid being detected by security mechanisms. Every day, attackers develop new exploitation techniques and escape anti-virus and anti-malware software. Thus, security solution companies are moving towards deep learning and machine learning techniques, where the algorithm learns the underlying information from a large collection of security data itself and makes predictions on new data. This, in turn, motivates hackers to develop new methods to escape detection mechanisms. Malware remains one of the major security threats in cyberspace. It is an unwanted program that makes the system behave differently from how it is supposed to behave. The solutions provided by antivirus software against this malware can only be used as a primary line of defence because they fail to detect new and upcoming malware created using polymorphic, metamorphic, domain-flux and IP-flux techniques. Machine learning algorithms have been employed to solve complex security threats for more than three decades BID0. These methods have the capability to detect new malware. Research is progressing at a high pace for security problems such as Intrusion Detection Systems (IDS), Malware Detection, and Information Leakage. Fortunately, today's Deep Learning (DL) approaches have performed well in various long-standing AI challenges BID1 such as natural language processing, computer vision, and speech recognition.
Recently, the application of deep learning techniques have been applied for various use cases of cyber security BID2.It has the ability to detect the cyber attacks by learning the complex underlying structure, hidden sequential relationships and hierarchical feature representations from a huge set of security data. In this paper, we are evaluating the efficiency of SVM and RNN machine learning algorithms for cybersecurity problems. Cybersecurity provides a set of actions to safeguard computer networks, systems, and data. This paper is arranged accordingly where related work are discussed in section 2 the knowledge of recurrent neural network (RNN) in section 3.In section 4 proposed methodology including description,data set are discussed and at last are furnished in Section 5. Section 6 is conclude with . In this section related work for cybersecurity use cases is discussed: Android Malware Classification (T1), Incident Detection (T2), and Fraud Detection (T3). The most commonly used approach for Malware detection in Android devices is the static and dynamic approach BID3. In the static approach, all the android permissions are collected by unpacking the application and whereas, in dynamic approach, the run-time execution attributes like system calls, network connections, electricity, user interactions and efficient utilization of memory. Most of the commercial systems used today use both the static and dynamic approach. For low computational cost, resource utilization, time resource Static analysis is mainly preferred for Android devices. Meanwhile dynamic analysis has the advantage to detect metamorphic and polymorphic malware. BID4 have evaluated the performance of traditional ML algorithms for malware detection on Android devices without using the API calls and permission as features. MalDozer proposed the use of API calls with deep learning approach to detect the Android malware and classify them accordingly BID5. BID6 API calls contains schematic information which helps in understand the intention of the app indirectly without any user interface. Using embedding techniques at training phase API calls are extracted using DEX assembly BID5 which helps in effective malware detection on neural networks. The security issues in cloud computing are briefly discussed in BID7. BID8 proposed ML-based anomaly detection that acts on the network, service and work-flow layers. A hybrid of both machine learning and rulebased systems are combined for intrusion detection in the cloud infrastructure BID9. BID10 shows how Incident Detection can perform well than intrusion detection. In BID11 discusses a detailed study on 6 different traditional ML classifiers in finding the credit card frauds, financial frauds. Credit card frauds are detected using Convolution Neural Networks. Fraud Detection in crowd sourcing projects is discussed in BID12.Statistical Fraud Detection method model is trained to discriminate the fraudulent and non fraudulent using supervised and unsupervised methods in credit card frauds. BID6 Especially in communication networks Fraud Detection are rectified using supervised learning by statistical learning of behaviour of networks us using Bayesian network approach. Data mining approaches related to financial Fraud Detection are discussed in BID13. BID14 mainly discusses the Fraud Detection in today's new Online e-commerce transaction using Recurrent Neural Network(RNN) which performed very well. Based on this a detailed survey is conducted in BID15. 
The risks and trust involved in e-commerce market are detailed studied in BID16. The first task is an android classification task. The dataset is created from a set of APK packages files collected from the Opera Mobile Store from Jan to Sep 2014 is used. This dataset consists of API(Application Programming Interface) information for 61,730 APK files where 30,897 files for training and 30,833 files for testing BID17. The second task is incident detection. This dataset contains operational log file that was captured from Unified Threat Management (UTM) of UniteCloud. Task 3 is Fraud Detection. This dataset is anonymised data that was unified using the highly correlated rule based uniformly distributed synthetic data (HCRUD) approach by considering similar distribution of features. To find an optimal , three trails of experiment with 700 epochs has run with learning rate varying in the range [0.01-0.5]. The highest 10-fold cross-validation accuracy was obtained by using the learning rate of 0.01. There was a sudden decrease in accuracy at learning rate 0.05 and finally attained highest accuracy at learning rates of 0.035, 0.045 and 0.05 in comparison to learning rate 0.01. This accuracy may have been enhanced by running the experiments till 1000 epochs. As more complex architectures we have experimented with, showed less performance within 500 epochs, so 0.01 as learning rate for the rest of the experiments by taking training time and computational cost into account. The RNN 1 to 6 layer network topology are used in order to find an optimum network structure for our input data since we don't know the optimal number of layers and neurons. We run 3 trails of experiments for each RNN network toplogy. Each trail of the experiment was run till 700 epochs. It was observed that most of the deep learning architectures learn the normal category patterns of input data within 400 epochs itself. The number of epochs required to learn the malicious category data usually varies. This complex architecture networks required a large number of iterations in order to reach the best accuracy. At last, we obtained the bestperformed network topology for each use case. For Task Two and Task Three, 3 layer RNN network performed well. For Task One, the 6 layer RNN network gave a good performance in comparison to the 4 layer RNN. Then we decided to use 6 layer RNN network for the rest of the experiments. 10-fold cross-validation accuracy of each RNN network topology for all use cases is shown in TAB2. An intuitive overview of our proposed RNN architecture for all use cases is shown in FIG0 This consists of the input layer with six hidden layers and an output layer. An input layer contains 4896 neurons for Task One, 9 neurons for Task Two and 12 neurons for Task Three. An output layer contains 2 neurons for Task One, 3 neurons for Task Two and 2 neurons for Task Three. The detailed structure and configuration of proposed RNN architecture are shown in TAB2. The neurons in input to hidden layer and hidden to output layer are fully connected. The proposed Recurrent Network is composed of recurrent layers, fully-connected layers, batch normalization layers and dropout layers. It contains the recurrent units/neurons. The units have self-connection/loops. This helps to carry out the previous time step information for the future time step. 
Batch Normalization and Regularization: To avoid overfitting and speed up the RNN model training, Dropout (0.001) BID18 and Batch Normalization BID19 were used in between fully-connected layers. A dropout layer removes neurons along with their connections randomly. In our alternative architectures for Task 1, the recurrent networks could easily overfit the training data without regularization even when trained on a large number of samples. Classification: For classification, the final fully connected layer is followed by a sigmoid activation function for Task One and Task Two, and a softmax for Task Three. The fully connected layer absorbs the non-linear kernel; the sigmoid layer outputs zero (benign) or one (malicious), while the softmax provides a probability score for each class. The prediction loss for Task One and Task Two is estimated using binary cross entropy, loss(pd, ed) = −(1/N) Σ_{i=1}^{N} [ed_i log(pd_i) + (1 − ed_i) log(1 − pd_i)], where pd denotes the vector of predicted probabilities for the testing data set, ed is a vector of the expected class labels, and the values are either 0 or 1. The prediction loss for Task Three is estimated using categorical cross entropy, loss(pd, ed) = −Σ_x pd(x) log(ed(x)), where pd is the true probability distribution and ed is the predicted probability distribution. We have used SGD as the optimizer to minimize the binary cross-entropy and categorical cross-entropy losses. We have evaluated the proposed RNN model against the classical machine learning classifier SVM on 3 different cybersecurity use cases: 1. identifying Android malware based on API information, 2. Incident Detection over unified threat management (UTM) operation on UniteCloud, and 3. Fraud Detection in financial transactions. The detailed results of the proposed RNN model on the 3 different use cases are displayed in TAB3. In this paper, the performance of RNN vs. other classical machine learning classifiers is evaluated for cybersecurity use cases such as Android malware classification, incident detection, and fraud detection. In all the three use cases, the RNN model performed better than the classical SVM classifier.
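A minimal Keras-style sketch of the described classification stack is given below; the recurrent unit counts and the single-step sequence shape are assumptions for illustration and not the exact configuration reported in the paper's tables.

```python
# Minimal sketch: stacked recurrent layers with batch normalisation and dropout,
# a sigmoid head with binary cross-entropy (Tasks One/Two) or a softmax head with
# categorical cross-entropy (Task Three), trained with SGD. Unit counts and the
# one-step sequence shape are illustrative assumptions.
from tensorflow.keras import Sequential, layers

def build_model(n_features, output_units, activation, loss):
    model = Sequential([
        layers.SimpleRNN(128, return_sequences=True, input_shape=(1, n_features)),
        layers.SimpleRNN(128),
        layers.BatchNormalization(),
        layers.Dropout(0.001),
        layers.Dense(output_units, activation=activation),
    ])
    model.compile(optimizer="sgd", loss=loss, metrics=["accuracy"])
    return model

task1_model = build_model(4896, 2, "sigmoid", "binary_crossentropy")      # Android malware
task3_model = build_model(12, 2, "softmax", "categorical_crossentropy")   # Fraud Detection
```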
Anatomical studies demonstrate that brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the spontaneous chaotic activity. Input alignment along the dominant chaotic projections causes the chaotic trajectories to become stable channels (or attractors), hence, improving the computational capability of a recurrent network. Using mean field analysis, we derive the impact of input alignment on the overall stability of attractors formed. Our indicate that input alignment determines the extent of intrinsic noise suppression and hence, alters the attractor state stability, thereby controlling the network's inference ability. Brain actively untangles the input sensory data and fits them in behaviorally relevant dimensions that enables an organism to perform recognition effortlessly, in spite of variations;;. For instance, in visual data, object translation, rotation, lighting changes and so forth cause complex nonlinear changes in the original input space. However, the brain still extracts high-level behaviorally relevant constructs from these varying input conditions and recognizes the objects accurately. What remains unknown is how brain accomplishes this untangling. Here, we introduce the concept of chaos-guided input alignment in a recurrent network (specifically, reservoir computing model) that provides an avenue to untangle stimuli in the input space and improve the ability of a stimulus to entrain neural dynamics. Specifically, we show that the complex dynamics arising from the recurrent structure of a randomly connected reservoir;; can be used to extract an explicit phase relationship between the input stimulus and the spontaneous chaotic neuronal response. Then, aligning the input phase along the dominant projections determining the intrinsic chaotic activity, causes the random chaotic fluctuations or trajectories of the network to become locally stable channels or dynamic attractor states that, in turn, improve its' inference capability. In fact, using mean field analysis, we derive the effect of introducing varying phase association between the input and the network's spontaneous chaotic activity. Our demonstrate that successful formation of stable attractors is strongly determined from the input alignment. We also illustrate the effectiveness of input alignment on a complex motor pattern generation task with reliable generation of learnt patterns over multiple trials, even in presence of external perturbations. We describe the effect of chaos guided input alignment on a standard firing-rate based reservoir model of N interconnected neurons. Specifically, each neuron in the network is described by an where r i (t) = φ(x i (t)) represents the firing rate of each neuron characterized by the nonlinear response function, φ(x) = tanh(x) and τ = 10ms is the neuron time constant. W represents a sparse N × N recurrent weight matrix (with W ij equal to the strength of the synapse connecting unit j to unit i) chosen randomly and independently from a Gaussian distribution with 0 mean and variance, g 2 /p c N; van , where g is the synaptic gain parameter and p c is the connection probability between units. 
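A minimal numpy sketch of the reservoir simulation is given below, assuming the standard firing-rate form τ dx_i/dt = −x_i + Σ_j W_ij r_j(t) + W_i^Input I_i(t), which is consistent with the parameters quoted above; the integration uses a simple Euler step with time in milliseconds.

```python
# Minimal reservoir simulation sketch (assumed standard firing-rate form:
# tau*dx_i/dt = -x_i + sum_j W_ij r_j + W_in_i * I_i). Time in ms, f = 10 Hz.
import numpy as np

N, p_c, g, tau, dt, f = 800, 0.1, 1.5, 10.0, 1.0, 0.01
rng = np.random.default_rng(0)
W = rng.normal(0.0, g / np.sqrt(p_c * N), (N, N)) * (rng.random((N, N)) < p_c)
W_in = rng.normal(0.0, 1.0, N)

def simulate(I0=0.0, phases=None, steps=2000):
    """Return the (steps, N) firing-rate trajectory for input amplitude I0."""
    phases = np.zeros(N) if phases is None else phases
    x = rng.normal(0.0, 0.5, N)
    rates = np.empty((steps, N))
    for k in range(steps):
        I = I0 * np.cos(2 * np.pi * f * k * dt + phases)   # sinusoidal drive per unit
        r = np.tanh(x)
        x = x + dt / tau * (-x + W @ r + W_in * I)
        rates[k] = r
    return rates

rates_spont = simulate(I0=0.0)                                        # spontaneous chaotic activity
rates_driven = simulate(I0=1.5, phases=rng.uniform(0, 2 * np.pi, N))  # random-phase drive
```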
The output unit z reads out the activity of the network through the connectivity matrix, W Out, with initial values drawn from a Gaussian distribution with 0 mean and variance 1/N. The readout weights are trained using Recursive Least Square (RLS) algorithm;;. The input weight matrix, W Input, is drawn from a Gaussian distribution with zero mean and unit variance. The external input, I, is an oscillatory sinusoidal signal, I = I 0 cos(2πf t + χ), with amplitude I 0, frequency f, that is the same for each unit i. Here, we use a phase factor χ chosen randomly and independently from a uniform distribution between 0 and 2π. This ensures that the spatial pattern of input is not correlated with the recurrent connectivity initially. Through input alignment analysis, we then obtain the optimal phases to project the inputs in the preferred direction of the network's spontaneous or chaotic activity. In all our simulations (without loss of generality), throughout the paper we have assumed, p c = 0.1, N = 800, g = 1.5, f = 10Hz, unless, specified otherwise. First, we ask the question how is the subspace of input driven activity aligned with respect to the subspace of spontaneous or chaotic activity of a recurrent network. Using Principal Component Analysis (PCA), we observed that the input-driven trajectory converges to a uniform shape becoming more circular with increasing input amplitude (See Appendix A, Fig. A1 (a,b) ). We utilize the concept of principal angles, introduced in Rajan et al. (2010a); , to visualize the relationship between the chaotic and input driven (circular) subspace. Specifically, for two subspaces of dimension D 1 and D 2 defined by unit principal component vectors (that are mutually Fig. 1 (a) schematically represents the angle between the circular input driven network activity and the irregular spontaneous chaotic activity. Here, θ chaos (and θ driven) refers to the subspace defined by the first two Principal Components (PCs) of the intrinsic chaotic activity (and input driven activity). It is evident that rotating the circular orbit by θ rotate will align it along the chaotic trajectory projection. We observe that aligning the inputs in directions (along dominant PCs) that account for maximal variance in the chaotic spontaneous activity facilitates intrinsic noise suppression at relatively low input amplitudes, thereby, allowing the network to produce stable trajectories. For instance, instead of using random phase input, we set I = I 0 cos(2πf t + Θ) and visualize the network activity as shown in Fig. 1 (b). Even at lower amplitude of I 0 = 1.5, we observe a uniform circular orbit (in the PC subspace) for the network activity that is characteristic of reduction in intrinsic noise and input sensitization. In fact, even after the input is turned off after t = 50ms, the neural units yield stable and synchronized trajectories with minimal variation across different trials (Fig. 1 (b, Right)) in comparison to the random phase input driven network (of higher amplitude). Note, Appendix A (Fig. A1 (b) ) shows an example of PC activity for random phase input driven reservoir driven with high input amplitude I 0 = 5. This shows the effectiveness of subspace alignment for intrinsic noise suppression. In addition, working in low input-amplitude regimes offers an additional advantage of higher network dimensionality (refer to Appendix A Fig. A1 (c) for dimensionality discussion), that in turn improves the overall discriminative ability of the network. Note, previous work Rajan et al. 
(2010a; b) have shown that spatial structure of the input does not have a keen influence on the spatial structure of the network response. Here, we bring in this association explicitly with subspace alignment. Θ, in the above analysis, is the input phase that corresponds to a subspace rotation of driven activity toward spontaneous chaotic activity. We observe that the temporal phase of the input contributes to the neuronal activity in a recurrent network. Fig. 1 (c) illustrates this correlation wherein the input phase determines the orientation of the input-driven circular orbit with respect to the dominant sub-space of intrinsic chaotic activity. For a given input frequency (f = 10Hz), input phase, Θ = 83.2 •, aligns the driven activity (θ driven) along the chaotic activity (θ chaos) ing in θ rotate = 0 • for varying input amplitude (I 0 = 1.5, 3). An interesting observation here is that the frequency of the input modifies the orientation of the evoked response that yields different input phases at which θ chaos and θ driven are aligned (refer to Fig. 1 (c, Right) ). We also observe that the subspace alignment is extremely sensitive toward the input phase in certain regions with abrupt jumps and non-smooth correlation. This non-linear behavior is a consequence of the recurrent connectivity that overall shapes the complex interaction between the driving input and the intrinsic dynamics. While this correlation yields several important implications for neuro-biological computational experiments;; , we utilize this behavior for subspace alignment. Consequently, in all our experiments, for a given θ rotate, we find a corresponding input phase Θ that approximately aligns the input in the preferred direction. Next, we describe the implication of input alignment along the chaotic projections on the overall learning ability of the network. First, we trained a recurrent network (Fig. 2 (a)) with two output units to generate a timed response at t = 1s as shown in Fig. 2 (a, Right). Two distinct and brief sinusoidal inputs (of 50 ms duration and amplitude I 0 = 1.5) were used to stimulate the recurrent network. The network trajectories produced were then mapped to the output units using RLS training (to learn the weights W Out). Here, the network (after readout training) is expected to produce timed output dynamics at readout unit 1 or 2 in response to input I 1 or I 2, respectively. The network is reliable if it generates consistent response at the readout units across repeated presentations of the inputs during testing, across different trials. This simple experiment utilizes the fact that neural dynamics in a recurrent network implicitly encode timing that is fundamental to the processing and generation of complex spatio-temporal patterns. Note, in such cases of multiple inputs, values of both inputs are zero, except for a timing window during which one input is briefly turned on in a given trial. Since both the inputs, in the above experiment, have same amplitude and frequency dynamics, the circular orbit describing the network activity in the input-driven state (for both inputs) is almost similar giving rise to one principal angle (θ driven1,2 in Fig. 2 (b) ) for the input subspace. To discriminate between the output responses for the two inputs, it is apparent that the inputs have to be aligned in different directions. One obvious approach is to align each input along two different principal angles defining the chaotic spontaneous activity (i.e. I 1 along ∠P C1.P C2 and I 2 along ∠P C3.P C4). 
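One simple way to implement the phase search described above is to scan candidate input phases, simulate the driven network for each, and keep the phase that minimises the principal angle between the leading-PC planes of the driven and spontaneous activity. The sketch below reuses the simulate helper and rates_spont from the earlier reservoir sketch; the exhaustive scan is an illustrative simplification.

```python
# Sketch: choose the input phase Theta that aligns the driven subspace with the
# dominant chaotic subspace, via an exhaustive phase scan (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import subspace_angles

def leading_pcs(rates, k=2):
    return PCA(n_components=k).fit(rates).components_.T   # (N, k) basis of the leading PCs

def best_alignment_phase(rates_spont, simulate, I0=1.5, n_phases=36):
    chaotic_basis = leading_pcs(rates_spont)
    candidates = np.linspace(0.0, 2 * np.pi, n_phases, endpoint=False)
    angles = []
    for theta in candidates:
        driven = simulate(I0=I0, phases=np.full(rates_spont.shape[1], theta))
        angles.append(subspace_angles(chaotic_basis, leading_pcs(driven)).max())
    return candidates[int(np.argmin(angles))]

# Theta_1 = best_alignment_phase(rates_spont, simulate)   # aligns I1 along theta_chaos
```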
Note, ∠P C1.P C2 denotes the angle θ calculated using Eqn. 2. Another approach is to align I 1 along ∠P C1.P C2 ≡ θ chaos and I 2 along ∠P C1.P C2 + 90 • ≡ θ chaos,90 • as shown in Fig. 2 (b). We analyze the latter approach in detail as it involves input phase rotation in one subspace that makes it easier for formal theoretical analysis. To characterize the discriminative performance of the network, we evaluated the Euclidean distances to measure the inter-and intra-input trajectories. Inter-input trajectory distance is measured in response to different inputs (I 1, I 2). Intra-input trajectory distance is measured in response to the clean input (say, I 1 = I 0 cos(2πf t + Θ 1)) and a slightly varied version of the same input (for instance, I 1,δ = (I 0 +)cos(2πf t + Θ 1). Here, is a random number between [0, 0.5]) and Θ 1 (Θ 2) is the input phase that aligns I 1 (I 2) along θ chaos (θ chaos,90 •). The Euclidean distance is calculated as, where r 1 (t) (r 2 (t)) is the firing rate activity of the network corresponding to I 1 (I 2). Note, for intra-input evaluation (say, for I 1), the distance is measured between firing rates corresponding to inputs I 1 and I 1,δ. The inter-/intra-input trajectory distances are plotted in Fig. 2 (c) for scenarios-with and without input alignment. It is desirable to have larger inter-trajectory distance and small intra-trajectory distance such that the network easily distinguishes between two inputs while being able to reproduce the required output response even when a particular input is slightly perturbed. We observe that aligning the inputs in direction parallel and perpendicular to the dominant projections (i.e. I 1 along θ chaos, I 2 along θ chaos,90 • as in Fig. 2 (c, Middle) ) increases the inter-trajectory distance compared to the non-aligned case (Fig. 2 (c, Left) ) while decreasing the intra-input trajectory separation. This further ascertains the fact that subspace alignment reduces intrinsic fluctuations within a network thereby enhancing its discrimination capability. Note, without input alignment, the intrinsic fluctua-tions cannot be overcome with low-amplitude inputs (I 0 = 1.5). Hence, for fair comparison and to obtain stable readout-trainable trajectory in the non-aligned case, we use a higher input amplitude of I 0 = 3. We hypothesize that intrinsic noise suppression occurs as input subspace alignment along dominant projections (that account for maximal variance such as P C1, P C2) causes chaotic trajectories along different directions (in this case, along θ chaos, θ chaos,90 •) to become locally stable channels or attractor states. These attractors behave as potential wells (or local minima from an optimization standpoint) toward which the network activity converges for different inputs. Thus, the successful formation of stable yet distinctive attractors for different inputs are strongly influenced by the orientation along which the inputs are aligned. As a consequence of our hypothesis, depending upon the orientation of the input with respect to the dominant chaotic activity (θ chaos in Fig. 2 (b) ), the extent of noise suppression will vary that will eventually alter the stability of the attractor states. To test this, we rotated I 2 (from θ chaos,90 •) further by 90 • (θ chaos,180 • in Fig. 2 (b) ) and monitored the intra-trajectory distance. In Fig. 
2 (c, Middle) corresponding to 90 • phase difference between I 1, I 2 (I 1 along θ chaos, I 2 along θ chaos,90 •), I 2 corresponds to a more stable attractor since its intradistance is lower than I 1. In contrast, in Fig. 2 (c, Right) corresponding to 180 • phase difference (I 1 along θ chaos, I 2 along θ chaos,180 •), I 1 turns out be more stable than I 2. Note, the 90 •, 180 • phase difference between I 1, I 2 (mentioned above and in the remainder of the paper) refers to the phase difference between the inputs in the chaotic subspace after subspace alignment using Θ. For our analysis, • phase between I 1, I 2 in chaotic subspace, while In addition to the trajectory distance, visualizing the network activity in the 3-D PC space (Fig. 2 (d) ), also, shows the influence of input orientation (and hence the phase correlation) toward formation of distinct attractor states. Since I 1, I 2 are aligned in the subspace defined by ∠P C1.P C2, the 2D projection of the circular orbit onto PC1 and PC2 in both input aligned scenarios (90 •, 180 • phase) are comparable. However, the third dimension, PC3, marks the difference between the two input projections. In fact, the progress of the network activity as time evolves (shown by dashed arrows in Fig. 2 (d) ) follows a completely different cycle for the input aligned scenarios. The change in the overall rotation cycle from anti-clockwise (I 2 with 90 • phase, Fig. 2 (d, Middle) ) to clockwise (I 2 with 180 • phase, Fig. 2 (d, Right) ) can be viewed as an indication toward the altering of the attractor state stability. On the other hand, the non-aligned case with I 0 = 3 yields incoherent and more random trajectory (Fig. 2 (d, Left) representative of intrinsic noise. In order to get more coherent activity and to suppress the noise further, we need to increase the input amplitude to I 0 ≥ 5 as shown in Appendix A (Fig. A1 (a,b) ). To explain the above analytically, we use mean-field methods developed to evaluate the properties of random network models in the limit N → ∞ Rajan et al. (2010b);. A key quantity in Mean Field Theory (MFT) is the average autocorrelation function that characterizes the interaction within the network as where <> denotes the time average. The main idea of MFT is to replace the network interaction term in Eqn. 1 by Gaussian noise η such that x i 0 (t) = Acos(2πf t + ζ) with A = I 0 / 1 + (2πf t) 2. Here, ζ incorporates the averaged temporal phase relationship between the reservoir neurons and the input induced by input subspace alignment, ζ(θ rotate) = Θ. The temporal correlation of η is calculated self-consistently from C(τ). For selfconsistence, the first and second moment of η must match the moments of the network interaction term. Thus, we get < η i (t) >= 0 as mean of the recurrent synaptic matrix < W ij >= 0. For calculating the second moment, we use the identity < W ij W kl >= g 2 δ ij δ kl /N and obtain < η i (t)η j (t + τ) >= g 2 C(τ). Combining this with the MFT noise-interaction based network equation yields where ∆(τ) =< x i 1 (t)x i 1 (t+τ) >. Eqn. 4 resembles the Newtonian motion equation of a classical particle moving under the influence of force given by the right hand side of the equation. This force depends on C that, in turn, depends on the input subspace alignment (ζ) which directs the initial position of the particle (or state of the network ∆). 
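The average autocorrelation entering the mean-field analysis can also be estimated directly from simulated rates; the sketch below assumes the unit- and time-averaged product ⟨r_i(t) r_i(t + τ)⟩ as the working definition of C(τ).

```python
# Sketch: empirical estimate of the average autocorrelation C(tau), assuming the
# unit- and time-averaged product <r_i(t) r_i(t+tau)> of the firing rates.
import numpy as np

def average_autocorrelation(rates, max_lag=200):
    # rates: (time, N) firing-rate matrix from the network simulation
    T = rates.shape[0]
    return np.array([
        np.mean(rates[: T - lag] * rates[lag:]) for lag in range(max_lag)
    ])

# C = average_autocorrelation(rates_driven)   # rates from the reservoir sketch above
```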
From this analogy, it is evident that analyzing the overall potential energy function of the particle (or network) will be equivalent to visualizing the different attractor states formed in a network in response to a particular input stimulus. Thus, we formulated an expression for the correlation function (with certain constraints) using Taylor series expansion, that allows us to derive the force and hence the dynamics of the network under various input alignment conditions. The non-linear firing rate function r(x) = φ(x) = tanh(gx) can be expanded with Taylor series for small values of g, i.e. g = 1 + δ, where δ denotes a small increment in g beyond 1. Note, g = 1 + δ satisfies the criterion, g > 1; , to operate the networks in chaotic regime. Also, the overall network statistics does not change with g being expressed as a gain factor in the firing-rate function instead of overall synaptic strength. Using tanh(gx) gx − 1/3g 3 x 3 + 2/15g 5 x 5, we can express C(τ) from Eqn. 3 as Now, we can express Eqn. 4 as Writing l = kδ due to the small limit of g, Eqn. 4 simplifies to where G = g 2 A 2 cos(ωτ + 2ζ)/(2δ 3) and n is a parameter defined in terms of m, δ. Appendix B provides a detailed derivation of Eqn. 5 and comments about the assumptions on initial conditions. Note, Eqn. 5 is an approximate version of Eqn. 4 that depicts network activity in the manner of Newtonian motion independent of all intrinsic time (or averaging parameters) while taking into account the influence of input alignment. Now, we can express the potential of the network driven by a force, F, equivalent to the right hand side of Eqn. 5 as We solve Eqn. 5, 6 with initial conditions k = 1,k = 0 and monitor the change in force, F, and potential, V, for different values of G. First, let us examine the attractor state formation when there is no input stimulus (i.e. G = 0) by visualizing the potential V. For G = 0, the expressions for force and potential become Fig. 3 shows the evolution of potential energy as k varies for different G. When input G = 0, the network dynamics is chaotic that in the formation of potential wells that are both equally stable. The network activity will thus converge to any one of these wells (that can be interpreted as attractor states) depending upon the initial state or starting point. This supports the observation that a network with no input yields chaotic activity with incoherent and irregular trajectory for every trial (see Fig. A1 (a) in Appendix A for reference). For nonzero G, the force (and potential) equation will be dependent on ζ since G cos(ωτ + 2ζ). For different values of ζ, we solved for V (Eqn. 6) numerically and plotted the potential evolution as shown in Fig. 3. For ζ = π/4, the potential well is more attractive on the left end. This validates the fact that intrinsic fluctuations are suppressed in the presence of an input. For ζ = π/2, the left attractor becomes more stable. Changing ζ further shows that the potential well on the right end becomes more stable. This confirms that input subspace alignment with respect to the initial chaotic state influences the overall stability and convergence capability of a recurrent network. The fact that stability corresponding to different attractor states (ζ = π/2, π) arises, qualifies our earlier hypothesis that input orientation with respect to the chaotic subspace alters the attractor state stability, corroborating the of Fig. 2 (c). Note, we solved Eqn. 
6 by setting some initial and boundary value conditions on k and by iterating over different n until we reached a steady state solution. Changing these conditions will in a completely new set of ζ values (different from those in Fig. 3). Nevertheless, we will observe a similar evolution of the potential well and change in attractor state stability as Fig. 3. Furthermore, the MFT calculations use ζ to denote a functional relationship between subspace alignment and input phase that eventually affects the attractor state stability. In the future, we will examine the real-time evaluation of ζ and its' impact on the analytical studies. Finally, the constraint under which we derive the potential energy functions and show the altering of attractor state is g = 1 + δ. We expect all our to be valid for large g as well since Eqn. 4 (that was simplified with Taylor expansion) still remains unchanged. Finally, we illustrate the effectiveness of input alignment on a complex motor pattern generation task. We trained a recurrent network to generate the handwritten words "chaos" and "neuron" in response to two different inputs 1. After obtaining the principal angle of the chaotic spontaneous activity, we aligned the input I 1 corresponding to "chaos" along ∠P C1.P C2 using optimal input phase Θ. Then, we monitored the output activity for different orientation (i.e. 90 •, 180 •) of input I 2, corresponding to "neuron", with respect to I 1 in the chaotic subspace. The two output units (representing the x and y axes) were trained using RLS to trace the original target locations (x(t), y(t)) of the handwritten patterns at each time instant. Fig. 4 (a) shows the handwritten patterns generated by the network across 10 test trials for the scenario when inputs are aligned at 90 • in the chaotic subspace. We observe similar robust patterns generated for 180 • phase as well. The notable feature of input alignment is that the chaotic trajectories become locally stable channels and function as dynamic attractor states. However, external perturbation can induce more chaos in the reservoir that will overwhelm the stable patterns of activity. To test the susceptibility of the dynamic attractor states formed with input alignment to external perturbation, we introduced random Gaussian noise onto a trained model along with the standard intrinsic chaos-aligned inputs during testing. The injection of noise alters the net external current received by the neuronal units (I = Σ i [W Input i I + N 0 rand(i)], where N 0 is the noise amplitude, i denotes a neural unit in the reservoir and rand is the random Gaussian distribution). Fig. 4 (b) shows the mean squared error (calculated as the average Euclidean distance between the target (x, y) and the actual output produced at different time instants, averaged across 20 test trials) of the network for varying levels of noise. As N 0 increases, we observe a steady increase in the error value implying degradation in the prediction capability of the network. However, for moderate noise (with N 0 < 0.01), the network exhibits high robustness with negligible degradation in prediction capability for both the words. Interestingly, for 90 • phase difference, "neuron" is more stable than "chaos" with increased reproducibility across different trials even with more noise (N 0 = 0.2). In contrast, for 180 • phase, "chaos" is less sensitive to noise (Fig. 4 (b) ). 
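The robustness measurement described above can be sketched as follows; the run_trained_network helper (returning the (x, y) readout trace for a given extra input current) and target_xy are hypothetical placeholders, and drawing the noise once per trial is an illustrative simplification.

```python
# Sketch of the perturbation-robustness measurement: Gaussian noise of amplitude
# N0 is added to each unit's external current during testing, and the error is
# the Euclidean distance between target and generated (x, y) traces, averaged
# over time and trials. run_trained_network and target_xy are hypothetical.
import numpy as np

def perturbation_error(run_trained_network, target_xy, noise_amp, n_trials=20, N=800):
    rng = np.random.default_rng(1)
    errors = []
    for _ in range(n_trials):
        noise = noise_amp * rng.standard_normal(N)             # N0 * rand(i) per unit
        output_xy = run_trained_network(extra_current=noise)   # (time, 2) readout trace
        errors.append(np.linalg.norm(output_xy - target_xy, axis=1).mean())
    return np.mean(errors)

# err = perturbation_error(run_trained_network, target_xy, noise_amp=0.01)
```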
On the other hand, for 45 • phase alignment between I 1, I 2 in the chaotic subspace, we observe that the network is sensitive even toward slight perturbation (N 0 = 0.001). This implies that the attractor states formed, in this case, are very unstable. This further corroborates the fact that the extent of noise suppression and hence the attractor state stability varies based upon the input alignment. Fig. 4(c) shows the handwritten pattern generated in one test trial for different phase alignment between I 1, I 2, when I 1 is aligned along the principal angle defining the spontaneous chaotic activity of the network. It is noteworthy to mention that the neural trajectories of the recurrent units corresponding to both cases are stable. In fact, we observe in the 90 • case, the trajectories of neurons responding to I 1 that corresponds to output "chaos" become slightly divergent and incoherent beyond 1000ms. In contrast, the trajectories of units responding to the word "neuron" are more synergized and coherent throughout the 1500ms time period of simulation. This indicates that the network activity for "neuron" converges to a more stable attractor state than "chaos". As a , we see that the network is more robust while reproducing "neuron" even in presence of external perturbation (N 0, noise amplitude is 0.2). In the 180 • phase difference case, we see exactly opposite stability phenomena with "chaos" converging to more stable attractor. Models of cortical networks often use diverse plasticity mechanisms for effective tuning of recurrent connections to suppress the intrinsic chaos (or fluctuations);. We show that input alignment alone produces stable and repeatable trajectories, even, in presence of variable internal neuronal dynamics for dynamical computations. Combining input alignment with recurrent synaptic plasticity mechanism can further enable learning of stable correlated network activity at the output (or readout layer) that is resistant to external perturbation to a large extent. Furthermore, since input subspace alignment allows us to operate networks at low amplitude while maintaining a stable network activity, it provides an additional advantage of higher dimensionality. A network of higher dimensionality offers larger number of disassociated principal chaotic projections along which different inputs can be aligned (see Appendix A, Fig. A1(c) ). Thus, for a classification task, wherein the network has to discriminate between 10 different inputs (of varying frequencies and underlying statistics), our notion of untangling with chaos-guided input alignment can, thus, serve as a foundation for building robust recurrent networks with improved inference ability. Further investigation is required to examine which orientations specifically improve the discrimination capability of the network and the impact of a given alignment on the stability of the readout dynamics around an output target. In summary, the analyses we present suggest that input alignment in the chaotic subspace has a large impact on the network dynamics and eventually determines the stability of an attractor state. In fact, we can control the network's convergence toward different stable attractor channels during its voyage in the neural state space by regulating the input orientation. This indicates that, besides synaptic strength variance , a critical quantity that might be modified by modulatory and plasticity mechanisms controlling neural circuit dynamics is the input stimulus alignment. 
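The noise-injection robustness evaluation used for Fig. 4(b) above can be sketched as follows. This is a minimal illustration rather than the authors' code: the one-step reservoir update `step_reservoir`, the readout weights `W_out`, the input weights `W_in`, and the target trajectory `target_xy` are assumed placeholders, and the error is the average Euclidean distance between the target and the produced output, averaged across trials, as described in the text.

```python
import numpy as np

def evaluate_noise_robustness(step_reservoir, W_out, W_in, I_t, target_xy,
                              noise_amps=(0.0, 0.001, 0.01, 0.2), n_trials=20):
    """Inject Gaussian noise into the net external current and measure output error.

    step_reservoir(x, ext) -> (x_next, r)  # assumed one-step reservoir update
    W_out: (2, N) readout weights, W_in: (N, d) input weights,
    I_t: (T, d) aligned input, target_xy: (T, 2) handwriting target.
    """
    T = target_xy.shape[0]
    N = W_out.shape[1]
    errors = {}
    for N0 in noise_amps:
        trial_err = []
        for _ in range(n_trials):
            x = 0.1 * np.random.randn(N)            # random initial state per trial
            dists = []
            for t in range(T):
                # net external current: chaos-aligned input plus Gaussian perturbation
                ext = W_in @ I_t[t] + N0 * np.random.randn(N)
                x, r = step_reservoir(x, ext)
                out = W_out @ r                       # (x, y) pen position
                dists.append(np.linalg.norm(out - target_xy[t]))
            trial_err.append(np.mean(dists))
        errors[N0] = np.mean(trial_err)               # error curve vs. noise amplitude
    return errors
```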
To examine the structure of the recurrent network's representations, we visualize and compare the neural trajectories in response to varying inputs using Principal Component Analysis (PCA) Rajan et al. (2010a). The network state at any given time instant can be described by a point in the Ndimensional space with coordinates corresponding to the firing rates of the N neuronal units. With time, the network activity traverses a trajectory in this N-dimensional space and we use PCA to outline the subspace in which this trajectory lies. To conduct PCA, we diagonalize the equal-time cross-correlation matrix of the firing rates of the N units as where the angle brackets, <>, denote time average and r(t) denotes the firing rate activity of the neuron. The eigenvalues of the matrix D (specifically, λ a / N a=1 λ a, where λ a is the eigenvalue corresponding to principal component a) indicate the contribution of different Principal Components (PCs) toward the fluctuations/total variance in the spontaneous activity of the network. Fig. A1 shows the impact of varying input amplitude (I 0) on the spontaneous chaotic activity of the network. For I 0 = 0, the network is completely chaotic as is evident from the highly variable projections of the network activity onto different Principal Components (PCs) (see Fig. A1 (a, Left) ).Generally, the leading 10−15% (depending upon the value of g) of the PCs account for ∼ 95% of the network's chaotic activity Rajan et al. (2010a). Visualizing the network activity in a 3D space composed of the dominant principal components (PC1, 2, 3) shows a random and irregular trajectory characteristic of chaos (Fig. A1 (a, Middle) ). In fact, plotting the trajectories (firing rate r(t) of the neuron as time evolves) of 5 recurrent units in the network (Fig. A1 (a, Right) ) shows diverging and incoherent activity across 10 different trials, also, representative of intrinsic chaos. In addition, the projections of the network activity onto components with smaller variances, such as PC50, fluctuate more rapidly and irregularly (Fig. A1 (a, Left) ). This further corroborates the fact that the leading PCs (such as, PC1-PC15) define a network's spontaneous chaotic activity. Driving the recurrent network with a sinusoidal input of high amplitude (Fig. A1 (b) ) sensitizes the network toward the input, thereby, suppressing the intrinsic chaotic fluctuations. The PC projections of the network activity are relatively periodic. A noteworthy observation here is that the trajectories of the recurrent units (Fig. A1 (b, Right) become more stable and consistent across 10 different presentations of the input pattern with increasing amplitude. A readout layer appended to a recurrent network can be easily trained on these stable trajectories for a particular task. Thus, the input amplitude determines the network's encoding trajectories and in turn, its' inference ability. In fact, the chaotic intrinsic activity is completely suppressed for larger inputs. However, this is not preferred as input dominance drastically declines the discriminative ability of a network that can be justified by dimensionality measurements. The effective dimensionality of a reservoir is calculated as −1 that provides a measure of the effective number of PCs describing a network's activity for a given input stimulus condition. Fig. A1 (c) illustrates how the effective dimensionality decreases with increasing input amplitude for different g values. 
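The PCA procedure above (diagonalizing the equal-time cross-correlation matrix of firing rates and reading off per-component variance fractions) and the effective dimensionality measure can be sketched as below. The formula for the effective dimensionality was lost in the text above; the participation-ratio form used here, N_eff = 1 / Σ_a λ̃_a² with λ̃_a the normalized eigenvalues, is an assumption consistent with Rajan et al. (2010a), and the firing-rate array `rates` is a placeholder.

```python
import numpy as np

def pca_of_activity(rates):
    """rates: (T, N) array of firing rates r_i(t) for N units over T time steps."""
    r_centered = rates - rates.mean(axis=0, keepdims=True)
    # Equal-time cross-correlation matrix D_ij = < r_i(t) r_j(t) >_t
    D = (r_centered.T @ r_centered) / rates.shape[0]
    eigvals, eigvecs = np.linalg.eigh(D)           # symmetric matrix -> real spectrum
    eigvals = eigvals[::-1]                        # sort descending
    eigvecs = eigvecs[:, ::-1]
    var_fraction = eigvals / eigvals.sum()         # contribution of each PC to the variance
    # Effective dimensionality as a participation ratio of the normalized spectrum
    n_eff = 1.0 / np.sum(var_fraction ** 2)
    # Projection of the trajectory onto the leading PCs (e.g., PC1-3 for visualization)
    projections = r_centered @ eigvecs[:, :3]
    return var_fraction, n_eff, projections
```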
It is, hence, critical that input drive be strong enough to influence network activity while not overriding the intrinsic chaotic dynamics to enable the network to operate at the edge of chaos. Note, higher g in Fig. A1 (c) yields a larger dimensionality due to richer chaotic activity. In our simulations in Fig. A1 (b), the input is shown for 50ms starting at t = 0. Thus, we observe that the trajectories of the recurrent units are chaotic until the input is turned on. Although the network returns to spontaneous chaotic fluctuations when the input is turned off (at t = 50ms), we observe that the network trajectories are stable and non-chaotic that is in coherence with the previous findings from Bertschinger & Natschläger; Rajan et al. (2010b). From the visualization of network activity in the dominant PC space, we see that the input-driven trajectory converges to a uniform shape becoming more circular (along PC1 and PC2 dimensions) with higher input amplitude (Fig. A1 (b, Middle) ). This informs us that the orbit describing the network activity in the input-driven state consists of a circle in a two-dimensional subspace of the full N-dimensional hyperspace of the neuronal activities (supporting the schematic depiction of driven and chaotic subspaces in Fig. 1 (a) ). Note, all simulations in Appendix are conducted with similar parameters mentioned in the main text, i.e., N = 800, f = 10Hz, p c = 0.1. First, let us solve for x i 1 such that we can get an expression for the correlation function, C(τ) in Eqn. 3 of main text. Noting that, x i 1 is driven by Gaussian noise (as indicated by the MFT noiseinteraction equation:, we can assume their moments as < x i 1 (t) >=< x i 1 (t + τ) >= 0, < x i 1 (t)x i 1 (t) >=< x i 1 (t + τ)x i 1 (t + τ) >= ∆ and < x i 1 (t)x i 1 (t + τ) >= ∆(τ). x 1 (t) (dropping index i as all neuronal variables have similar statistics) can then be written as x 1 (t) = αz 1 + βz 3; x 1 (t + τ) = αz 2 + γz 3 where z 1, z 2, z 3 are Gaussian random variables with 0 mean/unit variance and α = ∆ − |∆(τ)|, β = sgn(∆(τ)) |∆(τ)|, γ = |∆(τ)|. Now, writing x = x 0 + x 1, C is computed by integrating over z 1, z 2, z 3 as << φ(x i 0 (t) + αz 1 + βz 3 ) > z1 < φ(x i 0 (t + τ)) + αz 2 + γz 3 > z2 > z3 where < f (z) for z = z 1, z 2, z 3. Now, x i 0 (t) = Acos(2πf t + ζ), where A = I 0 / 1 + (2πf t) 2 (solve dxi 0 dt = −x i 0 + I 0 cos(2πf t + ζ) for x i 0 ) and ζ incorporates the averaged temporal phase relationship between the individual neurons and the input induced by input subspace alignment. Replacing the value of x i 0 in Eqn. 10, we get <<< φ(Acos(ζ) + αz 1 + βz 3 ) > z1 ) < φ(Acos(ωτ + ζ) + αz 2 + γz 3 > z2 > z3 ) > ζ The above correlation function also satisfies Eqn. 4 of main text. Note, ω = 2πf in Eqn. 11. Now we solve Eqn. 11 using Taylor series approximation for tanh(gx) = gx−1/3g 3 x 3 +2/15g 5 x 5. We have φ(Acos(ζ) + αz 1 + βz 3 ) = g(Acos(ζ) + αz 1 + βz 3 ) − 1/3g 3 (αz 1 + βz 3) 3 + 2/15g 5 (αz 1 + βz 3) | Input Structuring along Chaos for Stability | 616 | scitldr |
Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not work for discrete data. We introduce a method for training GANs with discrete data that uses the estimated difference measure from the discriminator to compute importance weights for generated samples, thus providing a policy gradient for training the generator. The importance weights have a strong connection to the decision boundary of the discriminator, and we call our method boundary-seeking GANs (BGANs). We demonstrate the effectiveness of the proposed algorithm with discrete image and character-based natural language generation. In addition, the boundary-seeking objective extends to continuous data, which can be used to improve stability of training, and we demonstrate this on Celeba, Large-scale Scene Understanding (LSUN) bedrooms, and Imagenet without conditioning. Generative adversarial networks (GAN, BID7 involve a unique generative learning framework that uses two separate models, a generator and discriminator, with opposing or adversarial objectives. Training a GAN only requires back-propagating a learning signal that originates from a learned objective function, which corresponds to the loss of the discriminator trained in an adversarial manner. This framework is powerful because it trains a generator without relying on an explicit formulation of the probability density, using only samples from the generator to train. GANs have been shown to generate often-diverse and realistic samples even when trained on highdimensional large-scale continuous data BID31 . GANs however have a serious limitation on the type of variables they can model, because they require the composition of the generator and discriminator to be fully differentiable. With discrete variables, this is not true. For instance, consider using a step function at the end of a generator in order to generate a discrete value. In this case, back-propagation alone cannot provide the training signal, because the derivative of a step function is 0 almost everywhere. This is problematic, as many important real-world datasets are discrete, such as character-or word-based representations of language. The general issue of credit assignment for computational graphs with discrete operations (e.g. discrete stochastic neurons) is difficult and open problem, and only approximate solutions have been proposed in the past BID2 BID8 BID10 BID14 BID22 BID40. However, none of these have yet been shown to work with GANs. In this work, we make the following contributions:• We provide a theoretical foundation for boundary-seeking GANs (BGAN), a principled method for training a generator of discrete data using a discriminator optimized to estimate an f -divergence BID29 BID30. The discriminator can then be used to formulate importance weights which provide policy gradients for the generator.• We verify this approach quantitatively works across a set of f -divergences on a simple classification task and on a variety of image and natural language benchmarks.• We demonstrate that BGAN performs quantitatively better than WGAN-GP BID9 in the simple discrete setting.• We show that the boundary-seeking objective extends theoretically to the continuous case and verify it works well with some common and difficult image benchmarks. 
Finally, we show that this objective has some improved stability properties within training and without. In this section, we will introduce boundary-seeking GANs (BGAN), an approach for training a generative model adversarially with discrete data, as well as provide its theoretical foundation. For BGAN, we assume the normal generative adversarial learning setting commonly found in work on GANs BID7, but these ideas should extend elsewhere. Assume that we are given empirical samples from a target distribution, {x DISPLAYFORM0, where X is the domain (such as the space of images, word-or character-based representations of natural language, etc.). Given a random variable Z over a space Z (such as m ), we wish to find the optimal parameters,θ ∈ R d, of a function, G θ: Z → X (such as a deep neural network), whose induced probability distribution, Q θ, describes well the empirical samples. In order to put this more succinctly, it is beneficial to talk about a probability distribution of the empirical samples, P, that is defined on the same space as Q θ. We can now consider the difference measure between P and Q θ, D(P, Q θ), so the problem can be formulated as finding the parameters: DISPLAYFORM1 Defining an appropriate difference measure is a long-running problem in machine learning and statistics, and choosing the best one depends on the specific setting. Here, we wish to avoid making strong assumptions on the exact forms of P or Q θ, and we desire a solution that is scalable and works with very high dimensional data. Generative adversarial networks (GANs, BID7 fulfill these criteria by introducing a discriminator function, D φ : X → R, with parameters, φ, then defining a value function, DISPLAYFORM2 where samples z are drawn from a simple prior, h(z) (such as U or N). Here, D φ is a neural network with a sigmoid output activation, and as such can be interpreted as a simple binary classifier, and the value function can be interpreted as the negative of the Bayes risk. GANs train the discriminator to maximize this value function (minimize the mis-classification rate of samples coming from P or Q θ), while the generator is trained to minimize it. In other words, GANs solve an optimization problem: DISPLAYFORM3 Optimization using only back-propogation and stochastic gradient descent is possible when the generated samples are completely differentiable w.r.t. the parameters of the generator, θ. In the non-parametric limit of an optimal discriminator, the value function is equal to a scaled and shifted version of the Jensen-Shannon divergence, 2 * D JSD (P||Q θ) − log 4, 1 which implies the generator is minimizing this divergence in this limit. f -GAN BID30 generalized this idea over all f -divergences, which includes the Jensen-Shannon (and hence also GANs) but also the Kullback-Leibler, Pearson χ 2, and squared-Hellinger. Their work provides a nice formalism for talking about GANs that use f -divergences, which we rely on here. Definition 2.1 (f -divergence and its dual formulation). Let f: R + → R be a convex lower semicontinuous function and f: C ⊆ R → R be the convex conjugate with domain C. Next, let T be an arbitrary family of functions, T = {T : X → C}. Finally, let P and Q be distributions that are completely differentiable w.r.t. the same Lebesgue measure, µ.2 The f -divergence, D f (P||Q θ), generated by f, is bounded from below by its dual representation BID29, DISPLAYFORM4 The inequality becomes tight when T is the family of all possible functions. 
The dual form allows us to change a problem involving likelihood ratios (which may be intractable) to an maximization problem over T. This sort of optimization is well-studied if T is a family of neural networks with parameters φ (a.k.a., deep learning), so the supremum can be found with gradient ascent BID30.Definition 2.2 (Variational lower-bound for the f -divergence). Let T φ = ν •F φ be a function, which is the composition of an activation function, ν: R → C and a neural network, F φ: X → R. We can write the variational lower-bound of the supremum in Equation 4 as 3: DISPLAYFORM5 Maximizing Equation 5 provides a neural estimator of f -divergence, or neural divergence BID12. Given the family of neural networks, T Φ = {T φ} φ∈Φ, is sufficiently expressive, this bound can become arbitrarily tight, and the neural divergence becomes arbitrarily close to the true divergence. As such, GANs are extremely powerful for training a generator of continuous data, leveraging a dual representation along with a neural network with theoretically unlimited capacity to estimate a difference measure. For the remainder of this work, we will refer to T φ = ν •F φ as the discriminator and F φ as the statistic network (which is a slight deviation from other works). We use the general term GAN to refer to all models that simultaneously minimize and maximize a variational lower-bound, V(P, Q θ, T φ), of a difference measure (such as a divergence or distance). In principle, this extends to variants of GANs which are based on integral probability metrics (IPMs, BID36) that leverage a dual representation, such as those that rely on restricting T through parameteric regularization or by constraining its output distribution BID37. Here we will show that, with the variational lower-bound of an f -divergence along with a family of positive activation functions, ν: R → R +, we can estimate the target distribution, P, using the generated distribution, Q θ, and the discriminator, T φ.Theorem 1. Let f be a convex function and T ∈ T a function that satisfies the supremum in Equation 4 in the non-parametric limit. Let us assume that P and Q θ (x) are absolutely continuous w.r.t. a measure µ and hence admit densities, p(x) and q θ (x). Then the target density function, p(x), is equal to (∂f /∂T)(T (x))q θ (x). DISPLAYFORM0 Proof. Following the definition of the f -divergence and the convex conjugate, we have: DISPLAYFORM1 As f is convex, there is an absolute maximum when DISPLAYFORM2. Rephrasing t as a function, T (x), and by the definition of T (x), we arrive at the desired . Theorem 1 indicates that the target density function can be re-written in terms of a generated density function and a scaling factor. We refer to this scaling factor, w (x) = (∂f /∂T)(T (x)), as the optimal importance weight to make the connection to importance sampling 4. In general, an optimal discriminator is hard to guarantee in the saddle-point optimization process, so in practice, T φ will define a lower-bound that is not exactly tight w.r.t. the f -divergence. Nonetheless, we can define an estimator for the target density function using a sub-optimal T φ. Definition 2.3 (f -divergence importance weight estimator). Let f and f, and T φ (x) be defined as in Definitions 2.1 and 2.2 but where ν: DISPLAYFORM3 The non-negativity of ν is important as the densities are positive. TAB0 provides a set of fdivergences (following suggestions of BID30 with only slight modifications) which are suitable candidates and yield positive importance weights. 
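As a concrete illustration of Definitions 2.1–2.3, the sketch below evaluates the variational lower bound E_P[T_φ(x)] − E_Q[f*(T_φ(x))] and the corresponding importance weights from samples of the statistic network output, for two of the divergences referenced above. The specific (activation ν, conjugate f*) pairs written here are standard f-GAN conventions and should be read as assumptions standing in for the entries of the table referenced as TAB0.

```python
import numpy as np

# (activation nu, convex conjugate f*) pairs; common f-GAN conventions (assumed forms).
F_DIVERGENCES = {
    # GAN / Jensen-Shannon style: T = -log(1 + exp(-F)),  f*(t) = -log(1 - exp(t))
    "gan": (lambda F: -np.log1p(np.exp(-F)),
            lambda t: -np.log(1.0 - np.exp(t))),
    # Reverse KL: T = -exp(-F),  f*(t) = -1 - log(-t)
    "reverse_kl": (lambda F: -np.exp(-F),
                   lambda t: -1.0 - np.log(-t)),
}

def lower_bound_and_weights(F_real, F_fake, divergence="gan"):
    """F_real, F_fake: statistic-network outputs F_phi(x) on data / generated samples."""
    nu, f_star = F_DIVERGENCES[divergence]
    T_real, T_fake = nu(F_real), nu(F_fake)
    bound = T_real.mean() - f_star(T_fake).mean()   # variational lower bound on D_f
    # For both divergences above, the importance weight reduces to w(x) = exp(F_phi(x))
    weights = np.exp(F_fake)
    return bound, weights
```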
Surprisingly, each of these yield the same function over the neural network before the activation function: w(x) = e F φ (x). 5 It should be noted thatp(x) is a potentially biased estimator for the true density; however, the bias only depends on the tightness of the variational lower-bound: the tighter the bound, the lower the bias. This problem reiterates the problem with all GANs, where proofs of convergence are only provided in the optimal or near-optimal limit BID7 BID30 BID23. As mentioned above and repeated here, GANs only work when the value function is completely differentiable w.r.t. the parameters of the generator, θ. The gradients that would otherwise be used to train the generator of discrete variables are zero almost everywhere, so it is impossible to train the generator directly using the value function. Approximations for the back-propagated signal exist BID2 BID8 BID10 BID14 BID22 BID40, but as of this writing, none has been shown to work satisfactorily in training GANs with discrete data. Here, we introduce the boundary-seeking GAN as a method for training GANs with discrete data. We first introduce a policy gradient based on the KL-divergence which uses the importance weights 4 In the case of the f -divergence used in BID7, the optimal importance weight equals DISPLAYFORM0 Note also that the normalized weights resemble softmax probabilities Algorithm 1. Discrete Boundary Seeking GANs (θ, φ) ← initialize the parameters of the generator and statistic network repeat DISPLAYFORM1 Compute the un-normalized and normalized importance weights (applied uniformly if P and Q θ are multi-variate) DISPLAYFORM2 Optimize the generator parameters until convergence as a reward signal. We then introduce a lower-variance gradient which defines a unique reward signal for each z and prove this can be used to solve our original problem. Policy gradient based on importance sampling Equation 7 offers an option for training a generator in an adversarial way. If we know the explicit density function, q θ, (such as a multivariate Bernoulli distribution), then we can, usingp(x) as a target (keeping it fixed w.r.t. optimization of θ), train the generator using the gradient of the KL-divergence: DISPLAYFORM3 Here, the connection to importance sampling is even clearer, and this gradient resembles other importance sampling methods for training generative models in the discrete setting BID3 BID33. However, we expect the variance of this estimator will be high, as it requires estimating the partition function, β (for instance, using Monte-Carlo sampling). We address reducing the variance from estimating the normalized importance weights next. Lower-variance policy gradient Let q θ (x) = Z g θ (x | z)h(z)dz be a probability density function with a conditional density, DISPLAYFORM4 be a Monte-Carlo estimate of the normalized importance weights. The gradient of the expected conditional KL-divergence w.r.t. the generator parameters, θ, becomes: DISPLAYFORM0 where we have approximated the expectation using the Monte-Carlo estimate. Minimizing the expected conditional KL-divergences is stricter than minimizing the KL-divergence in Equation 7, as it requires all of the conditional distributions to match independently. We show that the KL-divergence of the marginal probabilities is zero when the expectation of the conditional KL-divergence is zero as well as show this estimator works better in practice in the Appendix. Algorithm 1 describes the training procedure for discrete BGAN. 
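A minimal sketch of the importance-weight computation and generator surrogate in Algorithm 1 is given below. The statistic network `F_phi`, the shape conventions (M conditional samples per prior draw), and the REINFORCE-style surrogate loss are illustrative assumptions rather than the authors' implementation; only the weight form w(x) = e^{F_φ(x)} and the per-z Monte-Carlo normalization follow directly from the text.

```python
import numpy as np

def bgan_normalized_weights(F_phi, x_samples):
    """x_samples: (B, M, ...) discrete samples, M per prior draw z_b.
    F_phi is assumed to return an array of shape (B, M).
    Returns weights of shape (B, M), normalized within each z."""
    w = np.exp(F_phi(x_samples))                  # un-normalized weights w(x) = e^{F_phi(x)}
    return w / w.mean(axis=1, keepdims=True)      # Monte-Carlo normalization over the M samples

def bgan_generator_surrogate(log_g, w_tilde):
    """log_g: (B, M) values of log g_theta(x_m | z_b) for the sampled configurations.
    Differentiating this scalar w.r.t. theta, with w_tilde held fixed, recovers the
    importance-weighted policy gradient used to update the generator."""
    return -(w_tilde * log_g).mean()
```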
This algorithm requires an additional M times more computation to compute the normalized importance weights, though these can be computed in parallel exchanging space for time. When the P and Q θ are multi-variate (such as with discrete image data), we make the assumption that the observed variables are independent conditioned on Z. The importance weights, w, are then applied uniformly across each of the observed variables. Connection to policy gradients REINFORCE is a common technique for dealing with discrete data in GANs BID4 BID20. Equation 9 is a policy gradient in the special case that the reward is the normalized importance weights. This reward approaches the likelihood ratio in the non-parametric limit of an optimal discriminator. Here, we make another connection to REINFORCE as it is commonly used, with baselines, by deriving the gradient of the reversed KL-divergence. Definition 2.4 (REINFORCE-based BGAN). Let T φ (x) be defined as above where DISPLAYFORM1. Consider the gradient of the reversed KL-divergence: DISPLAYFORM2 From this, it is clear that we can consider the output of the statistic network, F φ (x), to be a reward and b = log β = E Q θ [w(x)] to be the analog of a baseline. 6 This gradient is similar to those used in previous works on discrete GANs, which we discuss in more detail in Section 3. For continuous variables, minimizing the variational lower-bound suffices as an optimization technique as we have the full benefit of back-propagation to train the generator parameters, θ. However, while the convergence of the discriminator is straightforward, to our knowledge there is no general proof of convergence for the generator except in the non-parametric limit or near-optimal case. What's worse is the value function can be arbitrarily large and negative. Let us assume that max T = M < ∞ is unique. As f is convex, the minimum of the lower-bound over θ is: inf DISPLAYFORM0 In other words, the generator objective is optimal when the generated distribution, Q θ, is nonzero only for the set {x | T (x) = M }. Even outside this worst-case scenario, the additional consequence of this minimization is that this variational lower-bound can become looser w.r.t. the f -divergence, with no guarantee that the generator would actually improve. Generally, this is avoided by training the discriminator in conjunction with the generator, possibly for many steps for every generator update. However, this clearly remains one source of potential instability in GANs. Equation 7 reveals an alternate objective for the generator that should improve stability. Notably, we observe that for a given estimator,p(x), q θ (x) matches when w(x) = (∂f /∂T)(T (x)) = 1. Definition 2.5 (Continuous BGAN objective for the generator). Let G θ: Z → X be a generator function that takes as input a latent variable drawn from a simple prior, z ∼ h(z). Let T φ and w(x) be defined as above. We define the continuous BGAN objective as:θ = arg min θ (log w(G θ (z))) 2. We chose the log, as with our treatments of f -divergences in TAB0, the objective is just the square of the statistic network output:θ DISPLAYFORM1 This objective can be seen as changing a concave optimization problem (which is poor convergence properties) to a convex one. On estimating likelihood ratios from the discriminator Our work relies on estimating the likelihood ratio from the discriminator, the theoretical foundation of which we draw from f -GAN BID30. 
The connection between the likelihood ratios and the policy gradient is known in previous literature BID15, and the connection between the discriminator output and the likelihood ratio was also made in the context of continuous GANs BID26 BID39. However, our work is the first to successfully formulate and apply this approach to the discrete setting. Importance sampling Our method is very similar to re-weighted wake-sleep (RWS, BID3, which is a method for training Helmholtz machines with discrete variables. RWS also relies on minimizing the KL divergence, the gradients of which also involve a policy gradient over the likelihood ratio. Neural variational inference and learning (NVIL, BID25, on the other hand, relies on the reverse KL. These two methods are analogous to our importance sampling and REINFORCE-based BGAN formulations above. GAN for discrete variables Training GANs with discrete data is an active and unsolved area of research, particularly with language model data involving recurrent neural network (RNN) generators BID20. Many REINFORCE-based methods have been proposed for language modeling BID20 BID6 which are similar to our REINFORCE-based BGAN formulation and effectively use the sigmoid of the estimated loglikelihood ratio. The primary focus of these works however is on improving credit assignment, and their approaches are compatible with the policy gradients provided in our work. There have also been some improvements recently on training GANs on language data by rephrasing the problem into a GAN over some continuous space BID19 BID16 BID9. However, each of these works bypass the difficulty of training GANs with discrete data by rephrasing the deterministic game in terms of continuous latent variables or simply ignoring the discrete sampling process altogether, and do not directly solve the problem of optimizing the generator from a difference measure estimated from the discriminator. Remarks on stabilizing adversarial learning, IPMs, and regularization A number of variants of GANs have been introduced recently to address stability issues with GANs. Specifically, generated samples tend to collapse to a set of singular values that resemble the data on neither a persample or distribution basis. Several early attempts in modifying the train procedure (; BID35 as well as the identifying of a taxonomy of working architectures BID31 addressed stability in some limited setting, but it wasn't until Wassertstein GANs (WGAN, BID1 were introduced that there was any significant progress on reliable training of GANs. WGANs rely on an integral probability metric (IPM, BID36) that is the dual to the Wasserstein distance. Other GANs based on IPMs, such as Fisher GAN tout improved stability in training. In contrast to GANs based on f -divergences, besides being based on metrics that are "weak", IPMs rely on restricting T to a subset of all possible functions. For instance in WGANs, T = {T | T L ≤ K}, is the set of K-Lipschitz functions. Ensuring a statistic network, T φ, with a large number of parameters is Lipschitz-continuous is hard, and these methods rely on some sort of regularization to satisfy the necessary constraints. This includes the original formulation of WGANs, which relied on weight-clipping, and a later work BID9 which used a gradient penalty over interpolations between real and generated data. 
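For reference, the interpolation-based gradient penalty mentioned above (the WGAN-GP approach of BID9) is typically implemented as in the following sketch; this is a standard formulation written in PyTorch for autodifferentiation and is an assumption about the exact variant those authors used.

```python
import torch

def wgan_gp_penalty(discriminator, x_real, x_fake, lam=10.0):
    """Gradient penalty on random interpolations between real and generated samples."""
    B = x_real.size(0)
    eps = torch.rand(B, *([1] * (x_real.dim() - 1)), device=x_real.device)
    x_hat = (eps * x_real + (1.0 - eps) * x_fake).requires_grad_(True)
    d_hat = discriminator(x_hat)
    grads = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.view(B, -1).norm(2, dim=1)
    # Penalize deviation of the per-sample gradient norm from 1 (K-Lipschitz surrogate)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```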
Unfortunately, the above works provide little details on whether T φ is actually in the constrained set in practice, as this is probably very hard to evaluate in the high-dimensional setting. Recently, BID32 introduced a gradient norm penalty similar to that in BID9 without interpolations and which is formulated in terms of f -divergences. In our work, we've found that this approach greatly improves stability, and we use it in nearly all of our . That said, it is still unclear empirically how the discriminator objective plays a strong role in stabilizing adversarial learning, but at this time it appears that correctly regularizing the discriminator is sufficient. We first verify the gradient estimator provided by BGAN works quantitatively in the discrete setting by evaluating its ability to train a classifier with the CIFAR-10 dataset BID17. The "generator" in this setting is a multinomial distribution, g θ (y | x) modeled by the softmax output of a neural network. The discriminator, T φ (x, y), takes as input an image / label pair so that the variational lower-bound is: DISPLAYFORM0 For these experiments, we used a simple 4-layer convolutional neural network with an additional 3 fully-connected layers. We trained the importance sampling BGAN on the set of f -divergences given in TAB0 as well as the REINFORCE counterpart for 200 epochs and report the accuracy on the test set. In addition, we ran a simple classification baseline trained on cross-entropy as well as a continuous approximation to the problem as used in WGAN-based approaches BID9. No regularization other than batch normalization (BN, BID13 was used with the generator, while gradient norm penalty BID32 was used on the statistic networks. For WGAN, we used clipping, and chose the clipping parameter, the number of discriminator updates, and the learning rate separately based on training set performance. The baseline for the REIN-FORCE method was learned using a moving average of the reward. Our are summarized in TAB1 . Overall, BGAN performed similarly to the baseline on the test set, with the REINFORCE method performing only slightly worse. For WGAN, despite our best efforts, we could only achieve an error rate of 72.3% on the test set, and this was after a total of 600 epochs to train. Our efforts to train WGAN using gradient penalty failed completely, despite it working with higher-dimension discrete data (see Appendix). Image data: binary MNIST and quantized CelebA We tested BGAN using two imaging benchmarks: the common discretized MNIST dataset BID34 ) and a new quantized version of the CelebA dataset (see BID21, for the original CelebA dataset).For CelebA quantization, we first downsampled the images from 64 × 64 to 32 × 32. We then generated a 16-color palette using Pillow, a fork of the Python Imaging Project (https://pythonpillow.org). This palette was then used to quantize the RGB values of the CelebA samples to a one-hot representation of 16 colors. Our models used deep convolutional GANs (DCGAN, BID31 . The generator is fed a vector of 64 i.i.d. random variables drawn from a uniform distribution,. The output nonlinearity was sigmoid for MNIST to model the Bernoulli centers for each pixel, while the output was softmax for quantized CelebA.Our show that training the importance-weighted BGAN on discrete MNIST data is stable and produces realistic and highly variable generated handwritten digits FIG0 ). 
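One way to reproduce the CelebA quantization described above (downsample to 32×32, build a 16-color palette with Pillow, map pixels to a one-hot representation) is sketched below. The adaptive-palette call is a reasonable reading of the description rather than the authors' exact preprocessing; in particular, the sketch builds a palette per image for brevity, whereas a single palette shared across the dataset may have been used.

```python
import numpy as np
from PIL import Image

def quantize_image(path, size=32, n_colors=16):
    """Downsample an RGB image and quantize it to a one-hot, 16-color representation."""
    img = Image.open(path).convert("RGB").resize((size, size))
    # Build and apply a 16-color adaptive palette (palettized "P"-mode image)
    pal_img = img.convert("P", palette=Image.ADAPTIVE, colors=n_colors)
    indices = np.asarray(pal_img)                            # (size, size) ints in [0, n_colors)
    one_hot = np.eye(n_colors, dtype=np.float32)[indices]    # (size, size, n_colors)
    return one_hot.transpose(2, 0, 1)                        # channels-first, one channel per color
```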
Further quantitative experiments comparing BGAN against WGAN with the gradient penalty showed that when training a new discriminator on the samples directly (keeping the Right: Samples produced from the generator trained as a boundaryseeking GAN on the quantized CelebA for 50 epochs. Table 3 : Random samples drawn from a generator trained with the discrete BGAN objective. The model is able to successfully learn many important character-level English language patterns. And it 's miant a quert could he He weirst placed produces hopesi What 's word your changerg bette " We pait of condels of money wi Sance Jory Chorotic, Sen doesin In Lep Edger 's begins of a find", Lankard Avaloma was Mr. Palin, What was like one of the July 2 " I stroke like we all call on a Thene says the sounded Sunday in The BBC nothing overton and sleaWith there was a passes ipposing About dose and warthestrinds fro College is out in contesting rev And tear he jumped by even a roy generator fixed), the final estimated distance measures were higher (i.e., worse) for WGAN-GP than BGAN, even when comparing using the Wasserstein distance. The complete experiment and are provided in the Appendix. For quantized CelebA, the generator trained as a BGAN produced reasonably realistic images which resemble the original dataset well and with good diversity. Next, we test BGAN in a natural language setting with the 1-billion word dataset BID5, modeling at the character-level and limiting the dataset to sentences of at least 32 and truncating to 32 characters. For character-level language generation, we follow the architecture of recent work BID9, and use deep convolutional neural networks for both the generator and discriminator. Training with BGAN yielded stable, reliably good character-level generation (Table 3), though generation is poor compared to recurrent neural network-based methods BID38 BID24. However, we are not aware of any previous work in which a discrete GAN, without any continuous relaxation BID9, was successfully trained from scratch without pretraining and without an auxiliary supervised loss to generate any sensible text. Despite the low quality of the text relative to supervised recurrent language models, the demonstrates the stability and capability of the proposed boundary-seeking criterion for training discrete GANs. Here we present for training the generator on the boundary-seeking objective function. In these experiments, we use the original GAN variational lower-bound from BID7, only modifying the generator function. All use gradient norm regularization BID32 to ensure stability. We test here the ability of continuous BGAN to train on high-dimensional data. In these experiments, we train on the CelebA, LSUN BID42 datasets, and the 2012 ImageNet dataset with all 1000 labels BID18. The discriminator and generator were both modeled as 4-layer Resnets BID11 ) without conditioning on labels or attributes. Figure 3 shows examples from BGAN trained on these datasets. Overall, the sample quality is very good. Notably, our Imagenet model produces samples that are high quality, despite not being trained Published as a conference paper at ICLR 2018CelebA Imagenet LSUN Figure 3: Highly realistic samples from a generator trained with BGAN on the CelebA and LSUN datasets. These models were trained using a deep ResNet architecture with gradient norm regularization BID32 ). The Imagenet model was trained on the full 1000 label dataset without conditioning.conditioned on the label and on the full dataset. 
However, the story here may not be that BGAN necessarily generates better images than using the variational lower-bound to train the generator, since we found that images of similar quality on CelebA could be attained without the boundaryseeking loss as long as gradient norm regularization was used, rather we confirm that BGAN works well in the high-dimensional setting. As mentioned above, gradient norm regularization greatly improves stability and allows for training with very large architectures. However, training still relies on a delicate balance between the generator and discriminator: over-training the generator may destabilize learning and lead to worse . We find that the BGAN objective is resilient to such over-training. Stability in training with an overoptimized generator To test this, we train on the CIFAR-10 dataset using a simple DCGAN architecture. We use the original GAN objective for the discriminator, but vary the generator loss as the variational lower-bound, the proxy loss (i.e., the generator loss function used in BID7, and the boundary-seeking loss (BGAN). To better study the effect of these losses, we update the generator for 5 steps for every discriminator step. Our (Figure 4) show that over-optimizing the generator significantly degrades sample quality. However, in this difficult setting, BGAN learns to generate reasonable samples in fewer epochs than other objective functions, demonstrating improved stability. Following the generator gradient We further test the different objectives by looking at the effect of gradient descent on the pixels. In this setting, we train a DCGAN BID31 using the proxy loss. We then optimize the discriminator by training it for another 1000 updates. Next, we perform gradient descent directly on the pixels, the original variational lower-bound, the proxy, and the boundary seeking losses separately. Figure 4: Training a GAN with different generator loss functions and 5 updates for the generator for every update of the discriminator. Over-optimizing the generator can lead to instability and poorer depending on the generator objective function. Samples for GAN and GAN with the proxy loss are quite poor at 50 discriminator epochs (250 generator epochs), while BGAN is noticeably better. At 100 epochs, these models have improved, though are still considerably behind BGAN.Our show that following the BGAN objective at the pixel-level causes the least degradation of image quality. This indicates that, in training, the BGAN objective is the least likely to disrupt adversarial learning. Reinterpreting the generator objective to match the proposal target distribution reveals a novel learning algorithm for training a generative adversarial network (GANs, BID7 . This proposed approach of boundary-seeking provides us with a unified framework under which learning algorithms for both discrete and continuous variables are derived. Empirically, we verified our approach quantitatively and showed the effectiveness of training a GAN with the proposed learning algorithm, which we call a boundary-seeking GAN (BGAN), on both discrete and continuous variables, as well as demonstrated some properties of stability. Starting image (generated) 10k updates GAN Proxy GAN BGAN 20k updates Figure 5: Following the generator objective using gradient descent on the pixels. BGAN and the proxy have sharp initial gradients that decay to zero quickly, while the variational lower-bound objective gradient slowly increases. 
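The three generator objectives compared above can be written compactly from the statistic-network output. In the sketch below, `f_logits` denotes F_φ(G_θ(z)) on generated samples; the sigmoid-based forms for the lower-bound and proxy losses follow the original GAN convention, and the boundary-seeking loss is the squared statistic output from Definition 2.5. This is an illustrative sketch, not the training code used in the experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generator_losses(f_logits):
    """f_logits: statistic-network outputs F_phi(G_theta(z)) on a batch of generated samples."""
    d = sigmoid(f_logits)                     # discriminator probability D(G(z))
    return {
        # Variational lower-bound (minimax) generator loss: minimize E[log(1 - D(G(z)))]
        "lower_bound": np.mean(np.log(1.0 - d + 1e-8)),
        # Proxy (non-saturating) loss: minimize -E[log D(G(z))]
        "proxy": -np.mean(np.log(d + 1e-8)),
        # Boundary-seeking loss: (log w(G(z)))^2 = F_phi(G(z))^2 when w = e^{F_phi}
        "bgan": np.mean(f_logits ** 2),
    }
```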
The variational lower-bound objective leads to very poor images, while the proxy and BGAN objectives are noticeably better. Overall, BGAN performs the best in this task, indicating that its objective will not overly disrupt adversarial learning. Berthelot, David, Schumm, Tom, and Metz, Luke. Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017. In these experiments, we produce some quantitative measures for BGAN against WGAN with the gradient penalty (WGAN-GP, BID9 on the discrete MNIST dataset. In order to use back-propagation to train the generator, WGAN-GP uses the softmax probabilities directly, bypassing the sampling process at pixel-level and problems associated with estimating gradients through discrete processes. Despite this, WGAN-GP is been able to produce samples that visually resemble the target dataset. Here, we train 3 models on the discrete MNIST dataset using identical architectures with the BGAN with the JS and reverse KL f -divergences and WGAN-GP objectives. Each model was trained for 300 generator epochs, with the discriminator being updated 5 times per generator update for WGAN-GP and 1 time per generator update for the BGAN models (in other words, the generators were trained for the same number of updates). This model selection procedure was chosen as the difference measure (i.e., JSD, reverse KL divergence, and Wasserstein distance) as estimated during training converged for each model. WGAN-GP was trained with a gradient penalty hyper-parameter of 5.0, which did not differ from the suggested 10.0 in our experiments with discrete MNIST. The BGAN models were trained with the gradient norm penalty of 5.0 BID32.Next, for each model, we trained 3 new discriminators with double capacity (twice as many hidden units on each layer) to maximize the the JS and reverse KL divergences and Wasserstein distance, keeping the generators fixed. These discriminators were trained for 200 epochs (chosen from convergence) with the same gradient-based regularizations as above. For all of these models, the discriminators were trained using the samples, as they would be used in practical applications. For comparison, we also trained an additional discriminator, evaluating the WGAN-GP model above on the Wasserstein distance using the softmax probabilities. Final evaluation was done by estimating difference measures using 60000 MNIST training examples againt 60000 samples from each generator, averaged over 12 batches of 5000. We used the training set as this is the distribution over which the discriminators were trained. Test set estimates in general were close and did not diverge from training set distances, indicating the discriminators were not overfitting, but training set estimates were slightly higher on average. Our show that the estimates from the sampling distribution from BGAN is consistently lower than that from WGAN-GP, even when evaluating using the Wasserstein distance. However, when training the discriminator on the softmax probabilities, WGAN-GP has a much lower Wasserstein distance. Despite quantitative differences, samples from these different models were indistinguishable as far as quality by visual inspection. This indicates that, though playing the adversarial game using the softmax outputs can generate realistic-looking samples, this procedure ultimately hurts the generator's ability to model a truly discrete distribution. Here we validate the policy gradient provided in Equation 10 theoretically and empirically. Theorem 2. 
Let the expectation of the conditional KL-divergence be defined as in Equation 9. Then E h(z) [D KL (p(x | z) g θ (x | z))] = 0 =⇒ D KL (p(x)||q θ ) = 0.Proof. As the conditional KL-divergence is has an absolute minimum at zero, the expectation can only be zero when the all of the conditional KL-divergences are zero. In other words: DISPLAYFORM0 As per the definition ofp(x | z), this implies that α(z) = w(x) = C is a constant. If w(x) is a constant, then the partition function β = CE Q θ = C is a constant. Finally, when w(x) β = 1, p(x) = q θ =⇒ D KL (p(x)||q θ ) = 0.In order to empirically evaluate the effect of using an Monte-Carlo estimate of β from Equation 8 versus the variance-reducing method in Equation 10, we trained several models using various sample sizes from the prior, h(z), and the conditional, g θ (x | z).We compare both methods with 64 samples from the prior and 5, 10, and 100 samples from the conditional. In addition, we compare to a model that estimates β using 640 samples from the prior and a single sample from the conditional. These models were all run on discrete MNIST for 50 epochs with the same architecture as those from Section 4.2 with a gradient penalty of 1.0, which was the minimum needed to ensure stability in nearly all the models. Our FIG1 ) show a clear improvement using the variance-reducing method from Equation 10 over estimating β. Wall-clock times were nearly identical for methods using the same number of total samples (blue, green, and red dashed and solid line pairs). Both methods improve as the number of conditional samples is increased.. α indicates the variance-reducing method, and β is estimating β using Monte-Carlo. z = indicates the number of samples from the prior, h(z), and x = indicates the number of samples from the conditional, g θ (x | z) used in estimation. Plotted are the estimated GAN distances (2 * JSD − log 4) from the discriminator. The minimum GAN distance, − log 4, is included for reference. Using the variance-reducing method gives a generator with consistently lower estimated distances than estimating β directly. | We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences | 617 | scitldr |
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high-variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis and numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and on high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks. Deep reinforcement learning has achieved impressive results in recent years in domains such as video games from raw visual inputs BID10, board games, simulated control tasks BID16, and robotics. An important class of methods behind many of these success stories is policy gradient methods BID28 BID22 BID5 BID18 BID11, which directly optimize parameters of a stochastic policy through local gradient information obtained by interacting with the environment using the current policy. Policy gradient methods operate by increasing the log probability of actions proportionally to the future rewards influenced by these actions. On average, actions which perform better will acquire higher probability, and the policy's expected performance improves. A critical challenge of policy gradient methods is the high variance of the gradient estimator. This high variance is caused in part by the difficulty of assigning credit to the actions which affected the future rewards. Such issues are further exacerbated in long-horizon problems, where assigning credit properly becomes even more challenging. To reduce variance, a "baseline" is often employed, which allows us to increase or decrease the log probability of actions based on whether they perform better or worse than the average performance when starting from the same state. This is particularly useful in long-horizon problems, since the baseline helps with temporal credit assignment by removing the influence of future actions from the total reward. A better baseline, which predicts the average performance more accurately, will lead to lower variance of the gradient estimator. The key insight of this paper is that when the individual actions produced by the policy can be decomposed into multiple factors, we can incorporate this additional information into the baseline to further reduce variance. In particular, when these factors are conditionally independent given the current state, we can compute a separate baseline for each factor, whose value can depend on all quantities of interest except that factor. This serves to further help credit assignment by removing the influence of other factors on the rewards, thereby reducing variance.
In other words, information about the other factors can provide a better evaluation of how well a specific factor performs. Such factorized policies are very common, with some examples listed below.• In continuous control and robotics tasks, multivariate Gaussian policies with a diagonal covariance matrix are often used. In such cases, each action coordinate can be considered a factor. Similarly, factorized categorical policies are used in game domains like board games and Atari.• In multi-agent and distributed systems, each agent deploys its own policy, and thus the actions of each agent can be considered a factor of the union of all actions (by all agents). This is particularly useful in the recent emerging paradigm of centralized learning and decentralized execution BID2 BID9. In contrast to the previous example, where factorized policies are a common design choice, in these problems they are dictated by the problem setting. We demonstrate that action-dependent baselines consistently improve the performance compared to baselines that use only state information. The relative performance gain is task-specific, but in certain tasks, we observe significant speed-up in the learning process. We evaluate our proposed method on standard benchmark continuous control tasks, as well as on a high-dimensional door opening task with a five-fingered hand, a synthetic high-dimensional target matching task, on a blind peg insertion POMDP task, and a multi-agent communication task. We believe that our method will facilitate further applications of reinforcement learning methods in domains with extremely highdimensional actions, including multi-agent systems. Videos and additional of the paper are available at https://sites.google.com/view/ad-baselines. Three main classes of methods for reinforcement learning include value-based methods BID26, policy-based methods BID28 BID5 BID18, and actor-critic methods BID6 BID14 BID11. Valuebased and actor-critic methods usually compute a gradient of the objective through the use of critics, which are often biased, unless strict compatibility conditions are met BID22 BID6. Such conditions are rarely satisfied in practice due to the use of stochastic gradient methods and powerful function approximators. In comparison, policy gradient methods are able to compute an unbiased gradient, but suffer from high variance. Policy gradient methods are therefore usually less sample efficient, but can be more stable than critic-based methods BID1.A large body of work has investigated variance reduction techniques for policy gradient methods. One effective method to reduce variance without introducing bias is through using a baseline, which has been widely studied BID21 BID27 BID3. However, fully exploiting the factorizability of the policy probability distribution to further reduce variance has not been studied. Recently, methods like Q-Prop BID4 ) make use of an action-dependent control variate, a technique commonly used in Monte Carlo methods and recently adopted for RL. Since Q-Prop utilizes off-policy data, it has the potential to be more sample efficient than pure on-policy methods. However, Q-prop is significantly more computationally expensive, since it needs to perform a large number of gradient updates on the critic using the off-policy data, thus not suitable with fast simulators. In contrast, our formulation of action-dependent baselines has little computational overhead, and improves the sample efficiency compared to on-policy methods with state-only baseline. 
The idea of using additional information in the baseline or critic has also been studied in other contexts. Methods such as Guided Policy Search BID13 and variants train policies that act on high-dimensional observations like images, but use a low dimensional encoding of the problem like joint positions during the training process. Recent efforts in multi-agent systems BID2 BID9 ) also use additional information in the centralized training phase to speed-up learning. However, using the structure in the policy parameterization itself to enhance the learning speed, as we do in this work, has not been explored. In this section, we establish the notations used throughout this paper, as well as basic for policy gradient methods, and variance reduction via baselines. This paper assumes a discrete-time Markov decision process (MDP), defined by (S, A, P, r, ρ 0, γ), in which S ⊆ R n is an n-dimensional state space, A ⊆ R m an m-dimensional action space, P: S × A × S → R + a transition probability function, r: S × A → R a bounded reward function, ρ 0: S → R + an initial state distribution, and γ ∈ a discount factor. The presented models are based on the optimization of a stochastic policy π θ: S × A → R + parameterized by θ. Let η(π θ) denote its expected return: DISPLAYFORM0, where τ = (s 0, a 0, . . .) denotes the whole trajectory, s 0 ∼ ρ 0 (s 0), a t ∼ π θ (a t |s t), and s t+1 ∼ P(s t+1 |s t, a t) for all t. Our goal is to find the optimal policy arg max θ η(π θ). We will useQ(s t, a t) to describe samples of cumulative discounted return, and Q(a t, s t) to describe a function approximation ofQ(s t, a t). We will use "Q-function" when describing an abstract action-value function. For a partially observable Markov decision process (POMDP), two more components are required, namely Ω, a set of observations, and O: S × Ω → R ≥0, the observation probability distribution. In the fully observable case, Ω ≡ S. Though the analysis in this article is written for policies over states, the same analysis can be done for policies over observations. An important technique used in the derivation of the policy gradient is known as the score function (SF) estimator BID28, which also comes up in the justification of baselines. Suppose that we want to estimate ∇ θ E x [f (x)] where x ∼ p θ (x), and the family of distributions {p θ (x): θ ∈ Θ} has common support. Further suppose that log p θ (x) is continuous in θ. In this case we have DISPLAYFORM0 The Policy Gradient Theorem BID22 states that DISPLAYFORM0 For convenience, define ρ π (s) = ∞ t=0 γ t p(s t = s) as the state visitation frequency, and DISPLAYFORM1 We can rewrite the above equation (with abuse of notation) as DISPLAYFORM2 It is further shown that we can reduce the variance of this gradient estimator without introducing bias by subtracting off a quantity dependent on s t fromQ(s t, a t) BID28 BID3. See Appendix A for a derivation of the optimal state-dependent baseline. DISPLAYFORM3 This is valid because, applying the SF estimator in the opposite direction, we have DISPLAYFORM4 In practice there can be rich internal structure in the policy parameterization. For example, for continuous control tasks, a very common parameterization is to make π θ (a t |s t) a multivariate Gaussian with diagonal variance, in which case each dimension a i t of the action a t is conditionally independent of other dimensions, given the current state s t. Another example is when the policy outputs a tuple of discrete actions with factorized categorical distributions. 
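A minimal sketch of the baseline-subtracted policy gradient described above is given below for a single sampled trajectory. The state-dependent baseline values and the per-step log-probabilities are assumed placeholders (e.g., from a fitted value function and the current policy); differentiating the surrogate with the returns and baseline held fixed recovers the standard score-function policy gradient.

```python
import numpy as np

def discounted_returns(rewards, gamma):
    """rewards: (T,) per-step rewards; returns hat{Q}(s_t, a_t) = sum_k gamma^k r_{t+k}."""
    Q_hat = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        Q_hat[t] = running
    return Q_hat

def pg_surrogate(log_probs, rewards, baseline, gamma=0.99):
    """log_probs: (T,) values of log pi_theta(a_t | s_t); baseline: (T,) values b(s_t).
    The gradient of this scalar w.r.t. theta, with hat{Q} and b treated as constants,
    is the baseline-subtracted policy gradient."""
    advantages = discounted_returns(rewards, gamma) - baseline
    return -(log_probs * advantages).sum()
```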
In the following subsections, we show that such structure can be exploited to further reduce the variance of the gradient estimator without introducing bias by changing the form of the baseline. Then, we derive the optimal action-dependent baseline for a class of problems and analyze the suboptimality of non-optimal baselines in terms of variance reduction. We then propose several practical baselines for implementation purposes. We conclude the section with the overall policy gradient algorithm with action-dependent baselines for factorized policies. We provide an exposition for situations when the conditional independence assumption does not hold, such as for stochastic policies with general covariance structures, in Appendix E, and for compatibility with other variance reduction techniques in Appendix F. In the following, we analyze action-dependent baselines for policies with conditionally independent factors. For example, multivariate Gaussian policies with a diagonal covariance structure are commonly used in continuous control tasks. Assuming an m-dimensional action space, we have DISPLAYFORM0 In this case, we can set b i, the baseline for the ith factor, to depend on all other actions in addition to the state. Let a t denote all dimensions other than i in a t and denote the ith baseline by b i (s t, a −i t). Due to conditional independence and the score function estimator, we have DISPLAYFORM0 Hence we can use the following gradient estimator DISPLAYFORM1 This is compatible with advantage function form of the policy gradient: DISPLAYFORM2 DISPLAYFORM3 Note that the policy gradient now comprises of m component policy gradient terms, each with a different advantage term. In Appendix E, we show that the methodology also applies to general policy structures (for example, a Gaussian policy with a general covariance structure), where the conditional independence assumption does not hold. The is bias-free albeit different baselines. In this section, we derive the optimal action-dependent baseline and show that it is better than the state-only baseline. We seek the optimal baseline to minimize the variance of the policy gradient estimate. First, we write out the variance of the policy gradient under any action-dependent baseline. Let us define z i:= ∇ θ log π θ (a which translates to meaning that different subsets of parameters strongly influence different action dimensions or factors. We note that this assumption is primarily for the theoretical analysis to be clean, and is not required to run the algorithm in practice. In particular, even without this assumption, the proposed baseline is bias-free. When the assumption holds, the optimal actiondependent baseline can be analyzed thoroughly. Some examples where these assumptions do hold include multi-agent settings where the policies are conditionally independent by construction, cases where the policy acts based on independent components BID0 of the observation space, and cases where different function approximators are used to control different actions or synergies BID23 BID24 without weight sharing. The optimal action-dependent baseline is then derived to be: DISPLAYFORM0 See Appendix B for the full derivation. Since the optimal action-dependent baseline is different for different action coordinates, it is outside the family of state-dependent baselines barring pathological cases. How much do we reduce variance over a traditional baseline that only depends on state? 
We use the following notation: DISPLAYFORM0 DISPLAYFORM1 Then, using Equation (Appendix C), we show the following improvement with the optimal action-dependent baseline: DISPLAYFORM2 See Appendices C and D for the full derivation. We conclude that the optimal action-dependent baseline does not degenerate into the optimal state-dependent baseline. Equation FORMULA1 states that the variance difference is a weighted sum of the deviation of the per-component score-weighted marginalized Q (denoted Y i) from the component weight (based on score only, not Q) of the overall aggregated marginalized Q values (denoted j Y j). This suggests that the difference is particularly large when the Q function is highly sensitive to the actions, especially along those directions that influence the gradient the most. Our empirical in Section 5 additionally demonstrate the benefit of action-dependent over state-only baselines. Using the previous theory, we now consider various baselines that could be used in practice and their associated computational cost. Marginalized Q baseline Even though the optimal state-only baseline is known, it is rarely used in practice BID1. Rather, for both computational and conceptual benefit, the choice of b(DISPLAYFORM0 which is the action-dependent analogue. In particular, when log probability of each policy factor is loosely correlated with the action-value function, then the proposed baseline is close to the optimal baseline. DISPLAYFORM1 This has the added benefit of requiring learning only one function approximator, for estimating Q(s t, a t), and implicitly using it to obtain the baselines for each action coordinate. That is, Q(s t, a t) is a function approximating samplesQ(s t, a t).Monte Carlo marginalized Q baseline After fitting Q π θ (s t, a t) we can obtain the baselines through Monte Carlo estimates: DISPLAYFORM2 where α j ∼ π θ (a i t |s t) are samples of the action coordinate i. In general, any function may be used to aggregate the samples, so long as it does not depend on the sample value a i t. For instance, for discrete action dimensions, the sample max can be computed instead of the mean. Mean marginalized Q baseline Though we reduced the computational burden from learning m functions to one function, the use of Monte Carlo samples can still be computationally expensive. In particular, when using deep neural networks to approximate the Q-function, forward propagation through the network can be even more computationally expensive than stepping through a fast simulator (e.g. MuJoCo). In such settings, we further propose the following more computationally practical baseline: DISPLAYFORM3 DISPLAYFORM4 t is the average action for coordinate i. The final practical algorithm for fully factorized policies is as follows. Require: number of iterations N, batch size B, initial policy parameters θ Initialize action-value function estimate Q π θ (s t, a t) ≡ 0 and policy DISPLAYFORM0, ∀t Perform a policy update step on θ using i (s t, a t) [Equation FORMULA10] Update action-value function approximation with current batch: Q π θ (s t, a t) end forComputing the baseline can be done with either proposed technique in Section 4.4. A similar algorithm can be written for general policies (Appendix E), which makes no assumptions on the conditional independence across action dimensions. Continuous control benchmarks Firstly, we present the of the proposed action-dependent baselines on popular benchmark tasks. 
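Before turning to the benchmark results, the following NumPy sketch illustrates the practical baselines described above for a diagonal Gaussian policy: the Monte Carlo marginalized-Q baseline (resampling the i-th action coordinate) and the mean marginalized-Q baseline (plugging in the mean action for coordinate i), together with the per-dimension advantage that enters the i-th component of the policy gradient. The quadratic stand-in for the learned Q-function, the dropped state argument, and all dimensions and values are assumptions made purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
m = 4                                         # action dimensions
mu, sigma = rng.normal(size=m), np.ones(m)    # diagonal Gaussian policy for one state

def Q(a):                                     # stand-in for a learned Q(s, a); assumption
    return -np.sum((a - 1.0) ** 2, axis=-1)

a = rng.normal(mu, sigma)                     # sampled action a_t

# Monte Carlo marginalized-Q baseline: b_i(s, a^{-i}) = E_{a^i ~ pi_i}[Q(s, a^{-i}, a^i)]
def mc_baseline(i, n_samples=1000):
    a_rep = np.tile(a, (n_samples, 1))
    a_rep[:, i] = rng.normal(mu[i], sigma[i], size=n_samples)   # resample coordinate i
    return Q(a_rep).mean()

# Mean marginalized-Q baseline: plug in the mean action for coordinate i (single evaluation)
def mean_baseline(i):
    a_mean = a.copy()
    a_mean[i] = mu[i]
    return Q(a_mean)

q_hat = Q(a)                                  # in practice: the sampled return Q-hat
for i in range(m):
    adv_i = q_hat - mc_baseline(i)            # per-dimension advantage A_i(s_t, a_t)
    grad_logp_i = (a[i] - mu[i]) / sigma[i] ** 2   # d/dmu_i log pi_i(a_i | s)
    g_i = adv_i * grad_logp_i                 # i-th component of the policy gradient
    print(f"dim {i}: advantage {adv_i:.3f}, gradient term {g_i:.3f}")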
These tasks have been widely studied in the deep reinforcement learning community BID1 BID4 BID17. The studied tasks include the hopper, half-cheetah, and ant locomotion tasks simulated in MuJoCo BID25. In addition to these tasks, we also consider a door opening task with a high-dimensional multi-fingered hand, introduced in BID16, to study the effectiveness of the proposed approach in high-dimensional tasks. FIG0 presents the learning curves on these tasks. We compare the action-dependent baseline with a baseline that uses only information about the states, which is the most common approach in the literature. We observe that the action-dependent baselines perform consistently better. A popular baseline parameterization choice is a linear function on a small number of non-linear features of the state BID1, especially for policy gradient methods. In this work, to enable a fair comparison, we use a Random Fourier Feature representation for the baseline BID15 BID17. The features are constructed as y(x) = sin((1/ν) P x + φ), where P is a matrix with each element independently drawn from the standard normal distribution, φ is a random phase shift in [−π, π), and ν is a bandwidth parameter. These features approximate the RKHS features under an RBF kernel. Using these features, the baseline is parameterized as b = w^T y(x), where x are the appropriate inputs to the baseline and w are trainable parameters. P and φ are not trained in this parameterization. Such a representation was chosen for two reasons: (a) we wish to have the same number of trainable parameters for all the baseline architectures, and not have more parameters in the action-dependent case (which has a larger number of inputs to the baseline); (b) since the final representation is linear, it is possible to accurately estimate the optimal parameters with a Newton step, thereby keeping the results free from confounding optimization issues. For policy optimization, we use a variant of the natural policy gradient method as described in BID17. See Appendix G for further experimental details. Choice of action-dependent baseline form Next, we study the influence of computing the baseline by using empirical averages sampled from the Q-function versus using the mean action of the action coordinate (both described in Section 4.4). In our experiments, as shown in Figure 2, we find that the two variants perform comparably, with the latter performing slightly better towards the end of the learning process. This suggests that though sampling from the Q-function might provide a better estimate of the conditional expectation in theory, function approximation from finite samples injects errors that may degrade the quality of estimates. In particular, sub-sampling from the Q-function is likely to produce better results if the learned Q-function is accurate for a large fraction of the action space, but getting such high-quality approximations might be hard in practice. High-dimensional action spaces Intuitively, the benefit of the action-dependent baseline can be greater for higher-dimensional problems. We show this effect on a simple synthetic example called m-DimTargetMatching. The example is a one-step MDP comprising a single state, S = {0}, an m-dimensional action space, A = R^m, and a fixed vector c ∈ R^m. The reward is given as the negative squared ℓ2 loss of the action vector, r(s, a) = −‖a − c‖₂².
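A small sketch of the Random Fourier Feature baseline parameterization described above is given below; the input dimensionality, number of features, bandwidth, and the placeholder regression targets are assumptions, and the ridge-regularized closed-form solve stands in for the Newton step on the linear weights.

import numpy as np

rng = np.random.default_rng(2)
d_in, d_feat = 10, 128                         # input size and number of random features (assumptions)
P = rng.normal(size=(d_feat, d_in))            # fixed random projection
phi = rng.uniform(-np.pi, np.pi, size=d_feat)  # fixed random phases
nu = 5.0                                       # bandwidth parameter

def features(x):
    # y(x) = sin(P x / nu + phi); P and phi are not trained
    return np.sin(x @ P.T / nu + phi)

# Fit the linear weights w of b(x) = w^T y(x) in closed form (ridge-regularized
# least squares, the linear analogue of a single Newton step).
X = rng.normal(size=(1000, d_in))              # baseline inputs (state, or state plus other actions)
targets = rng.normal(size=1000)                # empirical returns Q-hat (placeholder data)
Y = features(X)
w = np.linalg.solve(Y.T @ Y + 1e-6 * np.eye(d_feat), Y.T @ targets)
baseline_values = Y @ w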
The optimal action is thus to match the given vector by selecting a = c. (Figure 2 caption: Variants of the action-dependent baseline that use (i) sampling from the Q-function to estimate the conditional expectation, and (ii) using the mean action to form a linear approximation to the conditional expectation. We find that both variants perform comparably, with the latter being more computationally efficient.) The results for the demonstrative example are shown in Table 1, which shows that the action-dependent baseline successfully improves convergence more for higher-dimensional problems than for lower-dimensional problems. Due to the lack of state information, the linear baseline reduces to whitening the returns. The action-dependent baseline, on the other hand, allows the learning algorithm to assess the advantage of each individual action dimension by utilizing information from all other action dimensions. Additionally, this experiment demonstrates that our algorithm scales well computationally to high-dimensional problems. Table 1: Shown are the results for the synthetic high-dimensional target matching task (5 seeds), for 12- to 2000-dimensional action spaces. At high dimensions, the linear-feature action-dependent baseline provides notable and consistent variance reduction, as compared to a linear-feature baseline, resulting in around 10% faster convergence. For the corresponding learning curves, see Appendix G. Partially observable and multi-agent tasks Finally, we also consider the extension of the core idea of using global information, by studying a POMDP task and a multi-agent task. We use the blind peg-insertion task, which is widely studied in the robot learning literature BID12. The task requires the robot to insert the peg into the hole (slot), but the robot is blind to the location of the hole. Thus, we expect a searching behavior to emerge from the robot, where it learns that the hole is present on the table and performs appropriate sweeping motions until it is able to find the hole. In this case, we consider a baseline that is given access to the location of the hole. We observe that a baseline with this additional information enables faster learning. (Figure caption, panel (a): Success percentage on the blind peg insertion task. The policy still acts on the observations and does not know the hole location. However, the baseline has access to this goal information, in addition to the observations and action, and helps to speed up the learning. By comparison, in blue, the baseline has access only to the observations and actions.) For the multi-agent setting, we analyze a two-agent particle environment task in which the goal is for each agent to reach its goal, where this goal is known by the other agent and the agents have a continuous communication channel. Similar training procedures have been employed in recent related works BID9. FIG2 shows that the inclusion of information from other agents into the action-dependent baseline improves the training performance, indicating that variance reduction may be key for multi-agent reinforcement learning. An action-dependent baseline enables using additional signals beyond the state to achieve bias-free variance reduction. In this work, we consider both conditionally independent policies and general policies, and derive an optimal action-dependent baseline. We provide analysis of the variance
We additionally propose several practical action-dependent baselines which perform well on a variety of continuous control tasks and synthetic high-dimensional action problems. The use of additional signals beyond the local state generalizes to other problem settings, for instance in POMDP and multi-agent tasks. In future work, we propose to investigate related methods in such settings on large-scale problems. We provide a derivation of the optimal state-dependent baseline, which minimizes the variance of the policy gradient estimate, and is based in (, Theorem 8). More precisely, we minimize the trace of the covariance of the policy gradient; that is, the sum of the variance of the components of the vectors. Recall the policy gradient expression with a state-dependent baseline: DISPLAYFORM0 Denote g to be the associated random variable, that is, ∇ θ η(π θ) = E ρπ,π [g]: DISPLAYFORM1 The variance of the policy gradient is: DISPLAYFORM2 Note that E [η(π θ)]) contains a bias-free term, by the score function argument, which then does not affect the minimizer. Terms which do not depend on b(s t) also do not affect the minimizer. DISPLAYFORM3 We derive the optimal action-dependent baseline, which minimizes the variance of the policy gradient estimate. First, we write out the variance of the policy gradient under any action-dependent baseline. Recall the following notations: we define z i:= ∇ θ log π θ (a i t |s t) and the component policy gradient: DISPLAYFORM0 Denote g i to be the associated random variables: DISPLAYFORM1 such that DISPLAYFORM2 Recall the following assumption: DISPLAYFORM3 which translates to meaning that different subsets of parameters strongly influence different action dimensions or factors. This is true in case of distributed systems by construction, and also true in a single agent system if different action coordinates are strongly influenced by different policy network channels. Under this assumption, we have: DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 where we denote the mean correction term DISPLAYFORM7, and thus does not affect the optimal value. The overall variance is minimized when each component variance is minimized. We now derive the optimal baselines b * i (s t, a −i t) which minimize each respective component. DISPLAYFORM8 + E ρπ,a DISPLAYFORM9 Having written down the expression for variance under any action-dependent baseline, we seek the optimal baseline that would minimize this variance. DISPLAYFORM10 The optimal action-dependent baseline is: DISPLAYFORM11 We now turn to quantifying the reduction in variance of the policy gradient estimate under the optimal baseline derived above. Let Var * (i g i) denote the variance ing from the optimal action-dependent baseline, and let Var(i g i) denote the variance ing from another baseline DISPLAYFORM0, which may be suboptimal or action-independent. Recall the notation: DISPLAYFORM1 DISPLAYFORM2 DISPLAYFORM3 Finally, define the variance improvement DISPLAYFORM4 Using these definitions, the variance can be re-written as: DISPLAYFORM5 Furthermore, the variance of the gradient with the optimal baseline can be written as DISPLAYFORM6 The difference in variance can be calculated as: DISPLAYFORM7 DISPLAYFORM8 Using the notation from Appendix C and working off of Equation FORMULA1, we have: DISPLAYFORM0 DISPLAYFORM1 E BASELINES FOR GENERAL ACTIONSIn the preceding derivations, we have assumed policy actions are conditionally independent across dimensions. 
In the more general case, we only assume that there are m factors a which altogether forms the action a t. Conditioned on s t, the different factors form a certain directed acyclic graphical model (including the fully dependent case). Without loss of generality, we assume that the following factorization holds: DISPLAYFORM2 where f (i) denotes the indices of the parents of the ith factor. Let D(i) denote the indices of descendants of i in the graphical model (including i itself). In this case, we can set the ith baseline to be b i (s t, a DISPLAYFORM3), where [m] = {1, 2, . . ., m}. In other words, the ith baseline can depend on all other factors which the ith factor does not influence. The overall gradient estimator is given by DISPLAYFORM4 In the most general case without any conditional independence assumptions, we have f (i) = {1, 2, . . ., i − 1}, and D(i) = {i, i + 1, . . ., m}. The above equation reduces to DISPLAYFORM5 The above analysis for optimal baselines and variance suboptimality transfers also to the case of general actions. The applicability of our techniques to general action spaces may be of crucial importance for many application domains where the conditional independence assumption does not hold up, such as language tasks and other compositional domains. Even in continuous control tasks, such as hand manipulation, and many other tasks where it is common practice to use conditionally independent factorized policies, it is reasonable to expect training improvement from policies without a full conditionally independence structure. Computing action-dependent baselines for general actions The marginalization presented in Section 4.4 does not apply for the general action setting. Instead, m individual baselines can be trained according to the factorization, and each of them can be fitted from data collected from the previous iteration. In the general case, this means fitting m functions b i (s t, a 1 t, . . ., a i−1 t), for i ∈ {1, . . ., m}. The ing method is described in Algorithm 2.There may also exist special cases like conditional independent actions, for which more efficient baseline constructions exist. A closely related example to the conditionally independent case is the case of block diagonal covariance structures (e.g. in multi-agent settings), where we may wish to instead learn an overall Q function and marginalize over block factors. Another interesting example to explore is sparse covariance structures. Algorithm 2 Policy gradient for general factorization policies using action-dependent baselines Require: number of iterations N, batch size B, initial policy parameters θ Initialize baselines allow us to smoothly interpolate between high-bias, low-variance estimates and low-bias, high-variance estimates of the policy gradient. These methods are based on the idea of being able to predict future returns, thereby bootstrapping the learning procedure. In particular, when using the value function as a baseline, we have DISPLAYFORM6 DISPLAYFORM7 is an unbiased estimator for V (s). GAE uses an exponential averaging of such temporal difference terms over a trajectory to significantly reduce the variance of the advantage at the cost of a small bias (it allows us to pick where we want to be on the bias-variance curve DISPLAYFORM8 Thus, the temporal difference error with the action dependent baselines is an unbiased estimator for the advantage function as well. This allows us to use the GAE procedure to further reduce variance at the cost of some bias. 
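For reference, the generalized advantage estimation (GAE) procedure mentioned above can be sketched as the standard backward recursion over temporal-difference errors; this is the usual textbook formulation (using the γ and λ values quoted in the experimental parameters) rather than code from the paper, and it is agnostic to whether the baseline is state-only or action-dependent.

import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.97):
    """Standard generalized advantage estimation over one trajectory.

    `values` has length T+1 (a bootstrap value for the final state is included);
    lam=1 recovers high-variance Monte Carlo advantages, lam=0 the one-step TD error.
    """
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD error
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# Example with toy data
rewards = np.array([1.0, 0.0, 0.5, 1.0])
values = np.array([0.2, 0.1, 0.3, 0.4, 0.0])    # V(s_0..s_T); last entry bootstraps
print(gae_advantages(rewards, values))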
The following study shows that action-dependent baselines are consistent with TD procedures, with their temporal differences being estimates of the advantage function. Our results summarized in FIG3 suggest that slightly biasing the gradient to reduce variance produces the best results, while high-bias estimates perform poorly. Prior work with baselines that utilize global information BID2 employs the high-bias variant. The results here suggest that there is potential to further improve upon those by carefully studying the bias-variance trade-off. G HIGH-DIMENSIONAL ACTION SPACES: TRAINING CURVES Figure 5 shows the resulting training curves for a synthetic high-dimensional target matching task, as described in Section 5. For higher-dimensional action spaces (100 dimensions or greater), the action-dependent baseline consistently converges to the optimal solution 10% faster than the state-only baseline. For reference, Figure 6 shows the results of the original high-dimensional action space experiment. Due to a discovered issue in the TensorFlow version of rllab, which resulted in training instability, both methods (action-dependent and state-dependent baselines) under-performed relative to the revised experiment (Figure 5), which uses a clean implementation based on the implementation referenced in Rajeswaran et al. (2017b). The regression in training is most evident in the number of iterations required to solve the task; for instance, the old experiment could take as long as five times more iterations to solve the same task, even for a 12-dimensional task. Parameters: Unless otherwise stated, the following parameters are used in the experiments in this work: γ = 0.995, λ GAE = 0.97, kl desired = 0.025. Policies: The policies used are 2-layer fully connected networks with hidden sizes=. Initialization: the policy is initialized with Xavier initialization, except that final layer weights are scaled down (by a factor of 100x). Note that since the baseline is linear (with RBF features) and estimated with a Newton step, the initialization is inconsequential. Per-experiment configuration: The following parameters in TAB5 are for both state-only and action-dependent versions of the experiments. The m-DimTargetMatching experiments use a linear feature baseline. | Action-dependent baselines can be bias-free and yield greater variance reduction than state-only dependent baselines for policy gradient methods. | 618 | scitldr |
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches. The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously since the amount of labels required scales with the number of tasks. To mitigate this concern, we propose an active multitask learning algorithm that achieves knowledge transfer between tasks. The approach forms a so-called committee for each task that jointly makes decisions and directly shares data across similar tasks. Our approach reduces the number of queries needed during training while maintaining high accuracy on test data. Empirical on benchmark datasets show significant improvements on both accuracy and number of query requests. A triumph of machine learning is the ability to predict with high accuracy. However, for the dominant paradigm, which is supervised learning, the main bottleneck is the need to annotate data, namely, to obtain labeled training examples. The problem becomes more pronounced in applications and systems which require a high level of personalization, such as music recommenders, spam filters, etc. Several thousand labeled emails are usually sufficient for training a good spam filter for a particular user. However, in real world email systems, the number of registered users is potentially in the millions, and it might not be feasible to learn a highly personalized spam filter for each of them by getting several thousand labeled data points for each user. One method to relieve the need of the prohibitively large amount of labeled data is to leverage the relationship between the tasks, especially by transferring relevant knowledge from information-rich tasks to information-poor ones, which is called multitask learning in the literature. We consider multitask learning in an online setting where the learner sees the data sequentially, which is more practical in real world applications. In this setting, the learner receives an example at each time round, along with its task identifier, and then predicts its true label. Afterwards, the learner queries the true label and updates the model(s) accordingly. The online multitask setting has received increasing attention in the machine learning community in recent years BID6 BID0 BID7 BID9 BID4 BID13 BID11. However, they make the assumption that the true label is readily available to be queried, which is impractical in many applications. Also, querying blindly can be inefficient when annotation is costly. Active learning further reduces the work of the annotator by selectively requesting true labels from the oracles. Most approaches in active learning for sequential and streambased problems adopt a measure of uncertainty / confidence of the learner in the current example BID5 BID3 BID12 BID8 BID1.The recent work by BID10 combines active learning with online multitask learning using peers or related tasks. When the classifier of the current task is not confident, it first queries its similar tasks before requesting a true label from the oracle, incurring a lower cost. Their learner gives priority to the current task by always checking its confidence first. In the case when the current task is confident, the opinions of its peers are ignored. This paper proposes an active multitask learning framework which is more humble, in a sense that both the current task and its peers' predictions are considered simultaneously using a weighted sum. We have a committee which makes joint decisions for each task. 
In addition, after the true label of a training sample is obtained, this sample is shared directly with similar tasks, which makes training more efficient. The problem formulation and setup are similar to BID11 BID10. Suppose we are given K tasks and the k-th task is associated with N k training samples. We consider each task to be a linear binary classification problem, but the extensions to multiclass or non-linear cases are straightforward. We use the good-old perceptron-based update rule, in which the model for a given task is only updated when the prediction for that training example is in error. The data for task k is {x (i) k, y (i) k} for i = 1, . . ., N k, where x (i) k ∈ R D is the i-th instance from the k-th task, y (i) k ∈ {−1, +1} is the corresponding label, and D is the dimension of features. When the notation is clear from the context, we drop the task index k and simply write ((x (i), k), y (i) ). We consider the online setting where the training example ((x (t), k), y (t) ) comes at round t. Denote by {w (t) k} K k=1 the set of weights learned for the K binary classifiers at round t. Also denote by w ∈ R K×D the weight matrix whose k-th row is w k. The label ŷ (t) is predicted based on the sign of the output value from the model. Then the hinge loss of task k on the sample ((x (t), k), y (t) ) at round t is given by DISPLAYFORM3. In addition, we also consider the losses of its peer tasks m (m ≠ k) as DISPLAYFORM4, where l (t) km indicates the loss incurred by using task m's knowledge / classifier to predict the label of task k's training sample. l (t) km plays an important role in learning the similarities among tasks and hence the committee weights. Intuitively, two tasks should be more similar if one task's training samples can be correctly predicted using the other task's classifier. The goal of this paper is to achieve a high accuracy on the test data, and at the same time to issue as small a number of queries to the oracle as possible during training, by efficiently sharing and transferring knowledge among similar tasks. In this section we introduce our algorithm, Active Multitask Learning with Committees (AMLC), as shown in Algorithm 1. This algorithm provides an efficient way for online multitask learning. Each task uses not only its own knowledge but also knowledge from other tasks, and shares training examples across similar tasks when necessary. The two main components of Algorithm 1 are described in Sections 3.1 and 3.2. In Section 3.3, we compare AMLC with the state-of-the-art online multitask learning algorithm. We maintain and update a relationship matrix τ ∈ R K×K through the learning process. The k-th row of τ, denoted τ k, is the committee weight vector for task k, also referred to as the committee for brevity. Element τ ij of the relationship matrix indicates the closeness or similarity between task i and task j, and also the importance of task j in task i's committee in predicting. Given a sample ((x (t), k), y (t) ) at round t, the confidence of task k is jointly decided by its committee, namely a weighted sum of the confidences of all tasks: DISPLAYFORM2 DISPLAYFORM3. Each confidence is just the common confidence measure for the perceptron, using distance from the decision boundary BID5. The prediction is done by taking the sign of the confidence value. The learner then makes use of this confidence value by drawing a sample P (t) from a Bernoulli distribution, to decide whether to query the true label of this sample.
The larger p is, the more likely P (t) is to be 0, signifying greater confidence. The hyperparameter b controls the level of confidence that the current task has to have in order to not request the true label. The learner only queries the true label when the current task's committee turns out to be unconfident. Another binary variable M (t) is set to 1 if task k makes a mistake. Subsequently, its weight vector is updated following the conventional perceptron scheme. The learner then updates the relationship matrix following a similar policy as in BID11 BID10: DISPLAYFORM4. The hyperparameter C decides how much decrease happens on the weight given a non-zero loss, and λ = Σ K m=1 l (t) km. These new weights are then normalized to sum to 1. (Table 1 caption: Accuracy on the test set and total number of queries during training over 10 random shuffles of the training examples. The 95% confidence level is provided after the average accuracy. The best performance is highlighted in bold. On Spam Detection, both PEER+Share and AMLC are highlighted because AMLC has a lower mean but also smaller variance.) To further encourage data sharing and information transfer between similar tasks, after the true label is obtained, the learner also shares the data with similar tasks of task k, so that peer tasks can learn from this sample as well. Similar tasks are identified by having a larger weight than the current task in the committee. We set S (t) m = 1 to indicate that task m is a similar task to k and thus the data is shared with it. The most related work to ours is active learning from peers (PEER) BID10. In this section we discuss the main difference between our method and theirs with some intuition. Firstly, we do not treat the task itself and its peer tasks separately. Instead, the final confidence of the current task is jointly decided using the confidences of all tasks, weighted by the committee weight vector. It is humble in the sense that it always considers its peer tasks' advice when making a decision. There are two main advantages of our approach. 1) For PEER, no updates happen and no knowledge is transferred when the current task itself is confident. This can result in difficulties for the learner to recover from being blindly confident. Blind confidence happens when the classifier makes mistakes on training examples but with high confidence, especially in the early stages of training when data are scarce. 2) Our method updates the committee weight vector while keeping Σ m∈[K] τ km = 1 instead of Σ m∈[K], m≠k τ km = 1. It then becomes possible for the current task itself to have an equal or lower influence than other tasks on the final prediction. This is more desirable because identical tasks should have equal weights, and information-poor tasks should rely more on their information-rich peers when making predictions. Secondly, our algorithm enables the sharing of training data across similar tasks directly, after acquiring the true label of this data. Querying can be costly, and the best way to make use of the expensive label information is to share it. Assuming that all tasks are identical, the most productive algorithm would merge all data to learn a single classifier. PEER is not able to achieve this because each task is still trained independently, since tasks only have access to their own data. Though PEER indirectly accesses others' data through querying peer tasks, this sharing mechanism can be insufficient when tasks are highly similar.
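A compact sketch of one AMLC round, as described above, is given below: the committee forms a weighted confidence, a Bernoulli draw decides whether to query the oracle, the current task receives a perceptron update on a mistake, the committee weights are decreased multiplicatively for peers with non-zero hinge loss and renormalized, and the labeled sample is shared with peers whose committee weight exceeds that of the current task. The exact query probability and the exact form of the τ update are only partially visible in this text, so the b/(b + |p̂|) probability and the exponential decrease used here are assumptions.

import numpy as np

rng = np.random.default_rng(3)
K, D = 5, 20                          # number of tasks and feature dimension (assumptions)
W = np.zeros((K, D))                  # one linear classifier per task
tau = np.full((K, K), 1.0 / K)        # relationship matrix; row k is task k's committee
b, C = 1.0, 0.5                       # confidence and decay hyper-parameters

def amlc_round(x, y, k):
    conf = W @ x                                   # per-task signed margins
    p_hat = tau[k] @ conf                          # committee confidence for task k
    y_pred = 1.0 if p_hat >= 0 else -1.0
    p_query = b / (b + abs(p_hat))                 # assumed query probability
    if rng.random() < p_query:                     # P(t) = 1: ask the oracle for y
        if y_pred != y:
            W[k] += y * x                          # perceptron update on a mistake
        losses = np.maximum(0.0, 1.0 - y * conf)   # hinge losses l_km of all committee members
        lam = losses.sum() + 1e-12
        tau[k] *= np.exp(-C * losses / lam)        # shrink weights of poorly predicting tasks
        tau[k] /= tau[k].sum()                     # renormalize so the committee sums to 1
        for m in np.flatnonzero(tau[k] > tau[k, k]):   # share the sample with similar tasks
            if np.sign(W[m] @ x) != y:
                W[m] += y * x

x, y, k = rng.normal(size=D), 1.0, 2
amlc_round(x, y, k)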
In the case that all tasks are identical, our algorithm converges to a relationship matrix with identical elements and eventually all tasks are trained on every example that has been queried. In this section, we evaluate our proposed algorithm on three benchmark datasets for multitask learning, and compare our performance with many baseline models. We set b = 1 for all the experiments and tune the value of C from 20 values using 10-fold cross validation. Unless otherwise specified, all other model parameters are chosen via 10-fold cross validation. Landmine Detection 1 consists of 19 tasks collected from different landmine fields. Each task is a binary classification problem: landmines (+) or clutter (-), and each example consists of 9 features. Spam Detection 2 consists of labeled training data: spam (+) or non-spam (-) from the inboxes of 15 users, and each user is considered as a single task. Sentiment Analysis 3 (Blitzer et al.) consists of product re- views from Amazon containing reviews from 22 domains. We consider each domain as a binary classification task: positive review (+) and negative review (-). Details about our training and test sets are shown in Appendix A. We compare the performance of 5 different models. Random does not use any measure of confidence. Namely, the probability of querying or not querying true label are equal. Independent uses the confidence which is purely computed form the weight vector of the current task. Obviously both Random and Independent have no knowledge transfer among tasks. PEER is the algorithm from BID10. AMLC (Active Multitask Learning with Committees) is our proposed method as shown in Algorithm 1. In addition, we also show the performance of PEER+Share, in which we simply add to PEER the data sharing mechanism as illustrated in section 3.2. Table 1 shows the accuracy on test set and the total number of queries (label requests) to oracles during training of five models. Each value is the average of 10 random shuffles of the training set. The 95% confidence level is also shown. Notice that our re-implementation of PEER achieves similar performance on the Landmine and Spam datasets but seems to perform worse on Sentiment. The reason is that we are using a different representation of the training examples. We use the default bag-of-words representation coming with the dataset and there are approximately 2.9M features. The highlighted values illustrate the best performance across all models. On Spam Detection, AMLC is also highlighted because it is more confident about its accuracy even though the actual value is slightly lower than PEER+Share. It can be seen that our proposed methods (PEER+Share and AMLC) significantly outperform the the others. PEER has better performance compared to Random and Independent but still behaves worse than PEER+Share and AMLC. It can be shown that simply adding data sharing can improve both accuracy and number of queries used during training. The only exception is on Landmine Detection, where PEER+Share requests more queries than PEER. Though simply adding data sharing in improvement, after learning with joint decisions in AMLC, we observe further drastic decrease on the number of queries, while maintaining a high accuracy. Another goal of active multitask learning is to efficiently make use of the labels. In order to evaluate this, we give each model a fixed number of query budget and the training process is ended after the budget is exhausted. We show three plots (one for each dataset) in FIG1. 
Based on the difficulty of learning from each dataset, we choose different budgets to evaluate (up to 10%, 30% and 30% of the total training examples for Landmine, Spam and Sentiment respectively). We can see that given a limited number of query budgets, AMLC outperforms all models on all three datasets, as a of it encouraging more knowledge transfer among tasks. It is worth noting that the Landmine dataset is quite unbalanced (high proportion of negative labels), and PEER+Share and AMLC can achieve high accuracy with extremely limited number of queries. However, the classifier learned by PEER+Share is unconfident and thus it keeps requesting true labels in the following training process. We propose a new active multitask learning algorithm that encourages more knowledge transfer among tasks compared to the state-of-the-art models, by using joint decision / prediction and directly sharing training examples with true labels among similar tasks. Our proposed methods achieve both higher accuracy and lower number of queries on three benchmark datasets for multitask learning problems. Future work includes theoretical analysis of the error bound and comparison with those of the baseline models. Another interesting direction is to handle unbalanced task data. In other words, one task has much more / less training data than the others. | We propose an active multitask learning algorithm that achieves knowledge transfer between tasks. | 619 | scitldr |
Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable lossy image codec, that delivers competitive rate-distortion performance, on par with best hand-engineered alternatives, but has lower computational footprint on modern GPU-enabled platforms. Our show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage. Our codec improved the accuracy from 37% to 86% even at very low bit-rates, well below the practicality of JPEG (QF 20). dard JPEG compression, and we extended it to support trainable codecs. We show a generic version of the updated model in Fig. 1 with highlighted potentially trainable elements. In this study, we fixed the camera model, and jointly optimize the FAN and a deep compression network (DCN). We describe the design of our DCN codec, and its pre-training protocol below. Our DCN model follows the general auto-encoder architecture proposed by , but uses different quantization, entropy estimation and entropy coding schemes (Section 3.2). The model is fully convolutional, and consists of 3 sub-sampling (stride-2) convolutional layers, and 3 residual blocks in between (Fig. 2). We do not use any normalization layers (such as GDN), and rely solely on a single trainable scaling factor. Distribution shaping occurs organically thanks to entropy regularization (see Fig. A.3b in the appendix). The decoder mirrors the encoder, and implements up-sampling using sub-pixel convolutions (combination of convolutional and depth-to-space layers). We experimented with different variants of latent representation quantization, eventually converging on soft-quantization with a fixed codebook of integers with a given maximal number of bits per feature (bpf). We used a 5-bpf uniform codebook (M = 32 values from -15 to 16). We show the impact of codebook size in the appendix (Fig. A.3a). The model is trained to minimize distortion between the input and reconstructed images regularized by entropy of the latent representation: where X is the input image, and E, Q, and D denote the encoder, quantization, and decoder, respectively. We used a simple L 2 loss in the RGB domain as the distortion measure d(·, ·), a differentiable soft estimate of entropy H (Section 3.2), and SSIM as the validation metric. We developed our own quantization and entropy estimation mechanism, because we found existing approaches unnecessarily complicated and/or lacking in accuracy. Some of the most recent solutions include: addition of uniform random noise to quantized samples and non-parametric entropy modeling by a fitted piece-wise linear model (Ballé et al., 2016); differentiable entropy upper bound with a uniform random noise component ; regularization by penalizing norm of quantized coefficients and differences between spatial neighbors ; PixelCNN for entropy estimation and context modeling . Our approach builds upon the soft quantization used by , but is extended to address numerical stability problems, and allow for accurate entropy estimation. Let z be a vectorized latent representation Z of N images, i.e.: z k = z n,i,j,c where n, i, j, c advance sequentially along an arbitrary memory layout (here image, width, height, channel). quantization asz = Wc. 
Hard quantization replaces an input value with the closest available codeword, and corresponds to a rounding operation performed by the image codec. Soft quantization is a differentiable relaxation, which uses a linear combination of all code-words, as specified by the weight matrix. A detailed comparison of both quantization modes, along with an illustration of potential numerical pitfalls, can be observed in the top row of Fig. A.1 in the appendix. The hard and soft quantization are used in the forward and backward passes, respectively. In Tensorflow, this can be implemented as z = tf.stop_gradient(ẑ − z̃) + z̃. The weights for individual code-words in the mixture are computed by applying a kernel κ to the distances between the values and the code-words, which can be organized into a distance matrix D: The most commonly used implementations use a Gaussian kernel: which suffers from numerical problems for edge cases overflowing the codebook range (see Fig. A.1, top row, 4-th and 5-th columns). To alleviate these problems, we adopt a t-Student kernel: which behaves much better in practice. We do not normalize the kernels, and ensure correct proportions of the weights by numerically normalizing the rows of the weight matrix. We estimate the entropy of the quantized values by summing the weight matrix along the sample dimension, which yields an estimate of the histogram w.r.t. codebook entries (a comparison with an actual histogram is shown in Fig. A.3): This allows us to estimate the entropy of the latent representation by simply: We assess the quality of the estimate both for synthetic random numbers (1,000 numbers sampled from Laplace distributions of various scales) and for an actual latent representation of 128 × 128 px RGB image patches sampled from the clic test set (see Section 3.5 and examples in Fig. 4a). For the random sample, we fixed the quantization codebook to integers from −5 to 5, and performed the experiment numerically. For the real patches, we fed the images through a pre-trained DCN model (a medium-quality model with 32 feature channels; 32-C) and used the codebook embedded in the model (integers from −15 to 16). Fig. 3 shows the entropy estimation error (both absolute and relative) and scatter plots of real entropies vs. their soft estimates using the Gaussian and t-Student kernels. It can be observed that the t-Student kernel consistently outperforms the commonly used Gaussian. The impact of the kernels' hyperparameters on the relative estimation error is shown in Fig. A.2. The best combination of kernel parameters (ν = 50, γ = 25) is highlighted in red and used in all subsequent experiments. Figure 4: Example images from the considered clic, kodak and raw test sets (512×512 px). We used a state-of-the-art entropy coder based on asymmetric numeral systems and its finite state entropy (FSE) implementation. For simplicity and computational efficiency, we did not employ a context model and instead encode individual feature channels (channel EC). Bitrate savings w.r.t. global entropy coding (global EC) vary based on the model, image size and content. For 512 × 512 px images, we observed average savings of ≈ 12%, but for very small patches (e.g., 128 px), it may actually result in overhead (Tab. A.2). This can be easily addressed with a flag that switches between different compression modes, but we leave the practical design of the format container for future work.
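The soft-quantization and entropy-estimation steps described above can be sketched in NumPy as follows: distances to a fixed integer codebook, t-Student kernel weights with the quoted parameters (ν = 50, γ = 25), row normalization, a soft histogram obtained by summing the weight matrix over samples, and the entropy computed from it. The Laplace-distributed toy latent values and the comparison against the empirical entropy of the hard-quantized values are illustrative assumptions.

import numpy as np

codebook = np.arange(-15, 17, dtype=np.float64)     # 32 codewords, 5 bpf
nu, gamma = 50.0, 25.0                              # t-Student kernel parameters

def soft_quantize(z):
    # Pairwise distances between latent values and codewords
    D = z[:, None] - codebook[None, :]
    # t-Student kernel weights (unnormalized), then row-normalized
    W = (1.0 + gamma * D ** 2 / nu) ** (-(nu + 1.0) / 2.0)
    W = W / W.sum(axis=1, keepdims=True)
    z_soft = W @ codebook                           # soft (differentiable) quantization
    z_hard = codebook[np.abs(D).argmin(axis=1)]     # hard quantization (forward pass)
    return z_soft, z_hard, W

def soft_entropy(W):
    # Soft histogram over codebook entries, then entropy in bits
    h = W.sum(axis=0)
    p = np.clip(h / h.sum(), 1e-12, None)
    return -(p * np.log2(p)).sum()

z = np.random.default_rng(4).laplace(scale=3.0, size=10_000)   # toy latent values
z_soft, z_hard, W = soft_quantize(z)
print("soft entropy estimate:", soft_entropy(W))

# Exact entropy of the hard-quantized values, for comparison
_, counts = np.unique(z_hard, return_counts=True)
p = counts / counts.sum()
print("empirical entropy    :", -(p * np.log2(p)).sum())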
We use a simple structure of the bit-stream, which enables variable-length, per-channel entropy coding with random channel access (Tab. A.1). Such an approach offers flexibility and scalability benefits, e.g.: it allows for rapid analysis of selected feature channels ; it enables trivial parallel processing of the channels to speed up encoding/decoding on modern multi-core platforms. We pre-trained the DCN model in isolation and minimize the entropy-regularized L 2 loss (equation 1) on mixed natural images (MNI) from 6 sources: native camera output from the RAISE and MIT-5k datasets ; photos from the Waterloo exploration database ; HDR images ; computer game footage ; city scapes ; and the training sub-set of the CLIC professional dataset . In total, we collected 32,000 square crops ranging from 512 × 512 to 1024 × 1024 px, which were subsequently down-sampled to 256 × 256 px and randomly split into training and validation subsets. We used three augmentation strategies: we trained on 128 × 128 px patches randomly sampled in each step; we flip the patches vertically and/or horizontally with probability 0.5; and we apply random gamma correction with probability 0.5. This allowed for reduction of the training set size, to ≈10k images where the performance saturates. We used batches of 50 images, and learning rate starting at 10 −4 and decaying by a factor of 0.5 every 1,000 epochs. The optimization algorithm was Adam with default settings (as of Tensorflow 1.12). We train by minimizing MSE (L 2 loss) until convergence of SSIM on a validation set with 1,000 images. We control image quality by changing the number of feature channels. We consider three configurations for low, medium, and high quality with 16, 32, and 64 channels, respectively. Standard Codecs: As hand-crafted baselines, we consider three codecs: JPEG from the libJPEG library via the imageio interface, JPEG2000 from the OpenJPEG library via the Glymur interface, and BPG from its reference implementation . Since our model uses full-resolution RGB channels as input, we also use full-resolution chrominance channels whenever possible (JPEG and BPG). To make the comparison as fair as possible, we measure effective payload of the codecs. For the JPEG codec, we manually seek byte markers and include only the Huffman tables and Huffman-coded image data. For JPEG2000, we add up lengths of tile-parts, as reported by jpylyzer. For BPG, we seek the picture data length marker. Rate-distortion Trade-off: We used 3 datasets for the final evaluation (Fig. 4): (raw) 39 images with native camera output from 4 different cameras ; (clic) 39 images from the professional validation subset of; (kodak) 24 images from the standard Kodak dataset. All test images are of size 512 × 512px, and were obtained either by cropping directly without re-sampling (raw, kodak) or by resizing a central square crop (clic). We measured PSNR and SSIM using the scikit-image package and MS-SSIM using sewar. We used the default settings and only set the data range to. The values are computed as the average over RGB channels. The bit-rate for hand-crafted codecs was computed using the effective payload, as explained above. For the DCN codec, we completely encoded and decoded the images (Section A.1). Fig. 5 shows rate-distortion curves (SSIM vs. effective bpp) for the clic and raw datasets (see appendix for additional ). We show 4 individual images (Fig. 4) and averages over the respective datasets. 
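A short sketch of the per-channel quality measurement described above (PSNR and SSIM averaged over the RGB channels with scikit-image) is given below; the data-range value of 255 and the random stand-in images are assumptions, since the exact setting is not visible in this text.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rgb_metrics(reference, compressed, data_range=255):
    """Average PSNR/SSIM over the three RGB channels, as in the evaluation above."""
    psnr = np.mean([
        peak_signal_noise_ratio(reference[..., c], compressed[..., c], data_range=data_range)
        for c in range(3)])
    ssim = np.mean([
        structural_similarity(reference[..., c], compressed[..., c], data_range=data_range)
        for c in range(3)])
    return psnr, ssim

# Toy usage with random 8-bit images standing in for reference/decoded pairs
rng = np.random.default_rng(5)
ref = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
dec = np.clip(ref.astype(int) + rng.integers(-3, 4, size=ref.shape), 0, 255).astype(np.uint8)
print(rgb_metrics(ref, dec))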
Since standard quality control (e.g., quality level in JPEG, or number of channels in DCN) leads to a wide variety of bpps, we fit individual images to a parametric curve f (x) = (1 + e −αx β +γ) −1 − δ and show the averaged fits. It can be observed that our DCN model delivers significantly better than JPEG and JPEG2000, and approaches BPG. Processing Time: We collected DCN processing times for various platforms (Table 1), including desktops, servers, and low-power edge AI. We report network inference and complete encoding/decoding times on 512 × 512 px and 1920 × 1080 px images, averaged over the clic dataset (separate runs with batch size 1) and obtained using an unoptimized Python 3 implementation 2. On GPUenabled platforms, the inference time becomes negligible (over 100 fps for 512 × 512 px images, and over 20 fps for 1920 × 1080 px images on GeForce 1080 with a 2.6 GHz Xeon CPU), and entropy coding starts to become the bottleneck (down to 13 and 5 fps, respectively). We emphasize that the adopted FSE codec is one of the fastest available, and significantly outperforms commonly used arithmetic coding . If needed, channel EC can be easily parallelized, and the ANS codec could be re-implemented to run on GPU as well (Weißenberger &). As a reference, we measured the processing times of 1920 × 1080 px images for the standard codecs on the i7-7700 CPU @ 3.60GHz processor. JPEG coding with 1 thread takes between 0.061 s (Q=30) and 0.075 s (Q=90) (inclusive of writing time to RAM disk; using the pillow library). JPEG 2000 with 1 thread takes 0.61 s regardless of the quality level (inclusive of writing time to RAM disk; glymur library). BPG with 4 parallel threads takes 2.4 s (Q=1), 1.25 s (Q=20) and 0.72 s (Q=30) (inclusive of PNG reading time from RAM disk; bpgenc command line tool). While not directly comparable and also not optimized, some state-of-the-art deep learned codecs require minutes to process even small images, e.g., 5-6 min for 768 × 512 px images from the Kodak dataset reported by. The fastest state-of-the-art learned codec is reported to run at ≈100 fps on images of that size on a GPU-enabled desktop computer . We consider the standard photo manipulation detection setup where an adversary uses contentpreserving post-processing, and a forensic analysis network (FAN) needs to identify the applied operation or confirm that the image is unprocessed. We use a challenging real-world setup, where the FAN can analyze images only after transmission through a lossy dissemination channel (Fig. 1). In such conditions, forensic analysis is known to fail . We consider several versions of the channel, including: standard JPEG compression, pre-trained DCN codecs, and trainable DCN codecs jointly optimized along with the FAN. We analyze 128 × 128 px patches, and don't use down-sampling to isolate the impact of the codec. We consider 6 benign post-processing operations which preserve image content, but change lowlevel traces that can reveal a forgery. Such operations are commonly used either during photo manipulation or as a masking step afterwards. 
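The per-image parametric fit mentioned at the beginning of this section, f(x) = (1 + e^(−αx^β + γ))^(−1) − δ, can be sketched with scipy as below; the example (bpp, SSIM) points and the initial parameter guesses are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def rd_curve(x, alpha, beta, gamma, delta):
    # f(x) = (1 + exp(-alpha * x**beta + gamma))**-1 - delta
    return 1.0 / (1.0 + np.exp(-alpha * x ** beta + gamma)) - delta

# Example (bpp, SSIM) points for one image -- placeholder values
bpp = np.array([0.25, 0.5, 1.0, 2.0, 3.0])
ssim = np.array([0.80, 0.88, 0.93, 0.965, 0.975])

params, _ = curve_fit(rd_curve, bpp, ssim, p0=[2.0, 1.0, 0.5, 0.0], maxfev=10000)
dense_bpp = np.linspace(0.1, 3.5, 100)
fitted = rd_curve(dense_bpp, *params)       # averaged fits of this kind are what the figures show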
We include: (a) sharpening -implemented as an unsharp mask operator applied to the luminance channel in the HSV color space; (b) resampling involving successive down-sampling and up-sampling using bilinear interpolation and scaling factors 1:2 and 2:1; (c) Gaussian filtering with a 5 × 5 filter and standard deviation 0.83; (d) JPEG compression using a differentiable dJPEG model with quality level 80; (e) AWGN noise with standard deviation 0.02; and (f) median filtering with a 3 × 3 kernel. The operations are difficult to distinguish visually from native camera output -even without lossy compression (Fig. 7). The FAN is a state-of-the-art image forensics CNN with a constrained residual layer . We used the model provided in the toolbox , and optimize for classification (native camera output + 6 post-processing classes) of RGB image patches. In total, the model has 1.3 million parameters. We jointly train the entire workflow and optimize both the FAN and DCN models. Let M c denote the c-th manipulation (including identity for native camera output), and F denote the output of the FAN with F c being the probability of the corresponding manipulation class c. Let also C denote the adopted lossy compression model, e.g., D • Q • E for the DCN. We denote sRGB images rendered by the camera ISP as X. The FAN model is trained to minimize a cross-entropy loss: and the DCN to minimize its combination with the original fidelity/entropy loss (equation 1): where λ c is used to control the balance between the objectives (we consider values from 10 −3 to 1). We start from pre-trained DCN models (Section 3.4). The FAN model is trained from scratch. When JPEG compression was used in the channel, we used the differentiable dJPEG model from the original study , but modified it to use hard-quantization in the forward pass to ensure equivalent to libJPEG. We used quality levels from 10 to 95 with step 5. We followed the same training protocol as , and trained on native camera output (NCO). We used the DNet pipeline for Nikon D90, and randomly sampled 128 × 128 px RGB patches from 120 full-resolution images. The remaining 30 images were used for validation (we sampled 4 patches per image to increase diversity). We used batches of 20 images, and trained for 2,500 epochs with learning rate starting at 10 −4 and decaying by 10% every 100 epochs. For each training configuration, we repeated the experiment 3-5 times to validate training stability. We summarize the obtained in Fig. 8 which shows the trade-off between effective bpp (rate), SSIM (distortion), and manipulation detection accuracy. The figure compares standard JPEG compression (diamond markers), pre-trained basic DCN models (connected circles with black border), and fine-tuned DCN models for various regularization strenghts λ c (loose circles with gray border). Fine-tuned models are labeled with a delta in the auxiliary metric (also encoded as marker size and color), and the text is colored in red/green to indicate deterioration or improvement. Fig. 8a shows how the manipulation detection capability changes with effective bitrate of the codec. We can make the following observations. Firstly, JPEG delivers the worst trade-off and exhibits irregular behavior. The performance gap may be attributed to: (a) better image fidelity for the DCN codec, which retains more information at any bitrate budget; (b) presence of JPEG compression as one of the manipulations. 
The latter factor also explains irregular drops in accuracy, which coincide with the quality level of the manipulation and unfavorable multiples of the quantization tables (see also Fig. B.1). Secondly, we observe that fine-tuning the DCN model leads to a sudden increase in payload requirements, a minor improvement in quality, and a gradual increase in manipulation detection accuracy (as λ c → 0). We obtain significant improvements in accuracy even for the lowest quality level (37% → 85%; at such low bitrates JPEG stays below 30%). Interestingly, we don't see major differences in payload between the fine-tuned models, which suggests that qualitative differences in encoding may be expected beyond simple inclusion of more information. Fig. 8b shows the same results from a different perspective, and depicts the standard rate-distortion trade-off supplemented with auxiliary accuracy information. We can observe that DCN fine-tuning moves the model to a different point (greater payload, better quality), but does not seem to visibly deteriorate the rate-distortion trade-off (with the exception of the smallest regularization λ c = 0.001, which consistently shows better accuracy and worse fidelity). To explain the behavior of the models, we examine frequency attenuation patterns. We compute FFT spectra of the compressed images, and divide them by the corresponding spectra of uncompressed images. We repeat this procedure for all images in our raw test set, and average them to show consistent trends. The results are shown in Fig. 9 for the pre-trained DCN model (1st column) and fine-tuned models with decreasing λ c (increasing emphasis on accuracy). The plots are calibrated to show unaffected frequencies as gray, and attenuated/emphasized frequencies as dark/bright areas. The pre-trained models reveal clear and gradual attenuation of high frequencies. Once the models are plugged into the dissemination workflow, high frequencies start to be retained, but this alone does not suffice to improve manipulation detection. Increasing the importance of the cross-entropy loss gradually changes the attenuation patterns. The selection of frequencies becomes more irregular, and some bands are actually emphasized by the codec. The right-most column shows the most extreme configuration, where the trend is clearly visible (the outlier identified in the quantitative analysis in Section 4.3). The changes in codec behavior generally do not introduce visible differences in compressed images (as long as the data distribution is similar, see Section 5). We show an example image (raw test set), its compressed variants (low-quality, 16-C), and their corresponding spectra in Fig. 10. The spectra follow the identified attenuation pattern (Fig. 9). The compressed images do not reveal any obvious artifacts, and the most visible change seems to be the jump in entropy, as predicted in Section 4.3.
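The frequency-attenuation analysis described above reduces to a simple per-image computation: the ratio of FFT magnitudes of compressed and uncompressed images, averaged over a test set. A NumPy sketch is given below; the random stand-in images only illustrate the shapes involved.

import numpy as np

def attenuation_pattern(originals, compressed, eps=1e-8):
    """Average ratio of FFT magnitudes (compressed / original), centered with fftshift.

    Values below 1 mark attenuated frequencies, values above 1 amplified ones.
    """
    ratios = []
    for orig, comp in zip(originals, compressed):
        f_orig = np.abs(np.fft.fftshift(np.fft.fft2(orig)))
        f_comp = np.abs(np.fft.fftshift(np.fft.fft2(comp)))
        ratios.append(f_comp / (f_orig + eps))
    return np.mean(ratios, axis=0)

# Toy stand-ins for single-channel test images and their decoded versions
rng = np.random.default_rng(6)
originals = [rng.random((128, 128)) for _ in range(4)]
compressed = [im + 0.01 * rng.standard_normal(im.shape) for im in originals]
pattern = attenuation_pattern(originals, compressed)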
Characteristic pixel correlations, e.g., due to color interpolation, bias the codec and lead to occasional artifacts in MNIs (mostly in the clic test set; see Appendix B), and deterioration of the rate-distortion trade-off. The problem is present regardless of λ c, which suggests issues with the fine-tuning protocol (data diversity) and not the forensic optimization objective. We ran additional experiments by skipping photo acquisition and fine-tuning directly on MNI from the original training set (subset of 2,500 RGB images). We observed the same behavior (see Appendix C), and the optimized codec was artifact-free on all test sets. (Although, due to a smaller training set, the model loses some of its performance; cf. MNI in Fig. A.6 .) However, the FANs generalized well only to clic and kodak images. The originally trained FANs generalized reasonably well to different NCO images (including images from other 3 cameras) but not to clic or kodak. This confirms that existing forensics models are sensitive to data distribution, and that further work will be needed to establish more universal training protocols (see detailed discussion in Appendix D). Short fine-tuning is known to help , and we leave this aspect for future work. We are also planning to explore new transfer learning protocols . Generalization should also consider other forensic tasks. We optimized for manipulation detection, which serves as a building block for more complex problems, like processing history analysis or tampering localization (; ; ; a). However, additional pre-screening may also be needed, e.g., analysis of sensor fingerprints , or identification of computer graphics or synthetic content (b). Our study shows that lossy image codecs can be explicitly optimized to retain subtle low-level traces that are useful for photo manipulation detection. Interestingly, simple inclusion of high frequencies in the signal is insufficient, and the models learns more complex frequency attenuation/amplification patterns. This allows for reliable authentication even at very low bit-rates, where standard JPEG compression is no longer practical, e.g., at bit-rates around 0.4 bpp where our DCN codec with lowquality settings improved manipulation detection accuracy from 37% to 86%. We believe the proposed approach is particularly valuable for online media platforms (e.g., Truepic, or Facebook), who need to pre-screen content upon reception, but need to aggressively optimize bandwidth/storage. The standard soft quantization with a Gaussian kernel works well for rounding to arbitrary integers, but leads to numerical issues for smaller codebooks. Values significantly exceeding codebook endpoints have zero affinity to any of the entries, and collapse to the mean (i.e., ≈ 0 in our case; Fig. A.1a). Such issues can be addressed by increasing numerical precision, sacrificing accuracy (due to larger kernel bandwidth), or adding explicit conditional statements in the code. The latter approach is inelegant and cumbersome in graph-based machine learning frameworks like Tensorflow. We used a t-Student kernel instead and increased precision of the computation to 64-bits. This doesn't solve the problem entirely, but successfully eliminated all issues that we came across in our experiments, and further improved our entropy estimation accuracy. Fig. A.2 shows entropy estimation error for Laplace-distributed random values, and different hyper-parameters of the kernels. 
We observed the best results for a t-Student kernel with 50 degrees of freedom and bandwidth γ = 25 (marked in red). This kernel is used in all subsequent experiments. We experimented with different codebooks and entropy regularization strengths. Fig. A.3a shows how the quantized latent representation (QLR) changes with the size of the codebook. The figures also compare the actual histogram with its soft estimate (equation 6). We observed that the binary codebook is sub-optimal and significantly limits the achievable image quality, especially as the number of feature channels grows. Adding more entries steadily improved quality, and the codebook with M = 32 entries (values from -15 to 16) seemed to be the point of diminishing returns. Our entropy-based regularization turned out to be very effective at shaping the QLR (Fig. A.3b) and dispensed with the need to use other normalization techniques (e.g., GDN). We used only a single scalar multiplication factor responsible for scaling the distribution. All baseline and fine-tuned models use λ H = 250 (last column). Fig. A.4 visually compares the QLRs of our baseline low-quality codec (16 feature channels) with weak and strong regularization. We used a rudimentary bit-stream structure with the essential meta-data and markers that allow for successful decoding (Tab. A.1). Feature channels are entropy-coded independently (we refer to this approach as channel EC), and can be accessed randomly after decoding a lookup table.

Figure A.6: Rate-distortion performance for standard codecs and all DCN versions, including the pre-trained baselines (b) and 3 fine-tuned models with the weakest (λ c = 1) and the strongest emphasis on manipulation detection (λ c = 0.005 and 0.001). The latter incur only a fractional cost in payload/quality but bring significant benefits for manipulation detection. The top (bottom) rows show results for SSIM (MS-SSIM), respectively, and include DCN models fine-tuned on native camera output (raw) and mixed natural images (clic).

Fig. 8 visualizes the trade-offs in image compression and forensic analysis performance. Here we show the direct impact of image compression and fine-tuning settings on the achievable manipulation detection accuracy and its variations (Fig. B.1). For the JPEG codec, we observe nearly perfect manipulation detection for the highest quality level (QF=95), and a steady decline starting immediately below. The sudden drop in accuracy corresponds to the quality level of JPEG as one of the manipulations (QF=80). For DCN models, we clearly see steady improvement of fine-tuning w.r.t. the baseline models (on the right). Interestingly, the high-quality model shows a slight decline at first. Qualitative Analysis: The learned frequency attenuation/amplification patterns for all of the considered quality levels are shown in Fig. B.2. The visualizations were computed in the FFT domain and show the relative magnitude of individual frequencies w.r.t. original uncompressed images (averaged over all test images). In all cases, we observe complex behavior beyond simple inclusion of high-frequency content. The patterns seem to have a stable trajectory, despite independent runs of the experiment with different regularization strengths λ c. The same patterns are also visible in individual image spectra (Fig. B.4 - Fig. B.6). Generalization and Imaging Artifacts: While our baseline DCN models were pre-trained on a large and diverse training set, the fine-tuning procedure relied on the complete model of photo acquisition and dissemination.
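The kernel-based soft quantization and entropy estimation discussed at the beginning of this appendix can be sketched as follows. This is a minimal illustration under our own assumptions (PyTorch, a fixed integer codebook, and one possible parameterization of the t-Student kernel with the hyper-parameters quoted above); it is not the authors' implementation.

```python
# Sketch: kernel-based soft quantization with a differentiable entropy estimate.
# The exact kernel parameterization used in the paper is an assumption here.
import torch

def soft_assignment(z, codebook, df=50.0, gamma=25.0):
    """Affinity of each latent value to each codebook entry (rows sum to 1)."""
    d2 = (z.reshape(-1, 1) - codebook.reshape(1, -1)) ** 2
    w = (1.0 + gamma * d2 / df) ** (-(df + 1.0) / 2.0)   # t-Student-style kernel
    return w / w.sum(dim=1, keepdim=True)

def soft_quantize_and_entropy(z, codebook):
    a = soft_assignment(z, codebook)
    z_soft = a @ codebook                                  # differentiable surrogate
    z_hard = codebook[torch.argmin(torch.abs(
        z.reshape(-1, 1) - codebook.reshape(1, -1)), dim=1)]
    hist = a.mean(dim=0)                                   # soft histogram estimate
    entropy = -(hist * torch.log2(hist + 1e-12)).sum()     # bits per latent value
    # Straight-through estimator: hard values forward, soft gradients backward.
    z_q = z_soft + (z_hard - z_soft).detach()
    return z_q, entropy

codebook = torch.arange(-15.0, 17.0).double()   # M = 32 entries, as in the text
z = torch.randn(4096, dtype=torch.float64, requires_grad=True)
z_q, H = soft_quantize_and_entropy(z, codebook)
```

Using 64-bit precision, as in the snippet, mirrors the numerical-stability choice described in the text.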
Photo acquisition with digital cameras yields characteristic imaging artifacts, e.g., due to color filtering and interpolation. This leads to a characteristic distribution of native camera output (NCO), and ultimately biases the codec. Fig. B.3 shows the differences in SSIM between the baseline models and the models fine-tuned with a very weak cross-entropy objective (leading to no improvement in manipulation detection accuracy). For NCO (raw test set), we observe improvement of image quality (and corresponding increase in bitrate). For the kodak set, the quality remains mostly unaffected (with an increased bitrate). On the clic set, we observe minor quality loss, and occasional artifacts (see examples in Fig. B.4). In Fig. B.4 -Fig. B.6 we collected example images from all test sets (clic, kodak, and raw) and compress them with baseline and fine-tuned models. The images are ordered by SSIM deterioration due to weak fine-tuning (quality loss without gains in accuracy; Fig. B. 3) -the worst cases are shown at the top. (Note that some of the artifacts are caused by JPEG encoding of the images embedded in the PDF, and some geometric distortions were introduced by imperfect scaling in matplotlib.) In the first clic image (1st row in Fig. B.4), we can see color artifacts along the high-contrast wires. In the second image, we see distortions in the door blinds, and a subtle shift in the hue of the bike. For the remaining two images, SSIM remains the same or better and we do not see any artifacts. In the kodak set, the worst image quality was observed for kodim05 (1st row in Fig. B .5), but we don't see any artifacts. Figure C.1: Visualization of the rate-distortion-accuracy trade-off on the clic dataset after fine-tuning on mixed natural images. As discussed in Section 5, we ran additional experiments by skipping photo acquisition and finetuning directly on mixed natural images (MNI) -a subset of the original DCN training set (2,500 images). Images in this dataset tend to have more details and depict objects at a coarser scale, since they were all down-sampled to 256 × 256 px (from various original sizes). This required adjusting manipulation strength to maintain visual similarity between photo variations. In particular, we used weaker sharpening, Gaussian filtering with a smaller standard deviation (0.5), down&up-sampling via 75% of the image size (instead of 50%), Gaussian noise with standard deviation 0.012, and JPEG quality level 90. We fine-tuned for 600 epochs. We summarize the obtained in Fig C. 1 using images from the clic test set. In this experiment, the gap in manipulation detection accuracy between JPEG and baseline DCN has disappeared, except for remaining sudden drops at selected JPEG quality levels (corresponding to the manipulation quality factor 90). We still observe significant improvement for fine-tuned DCN models, but here it tends to saturate around 86%, which might explain negligible improvement of the high-quality 64-C model. By inspecting confusion matrices, we see most of the confusion between native, sharpen and awgn classes where the differences are extremely subtle. The fine-tuned DCN models remain close to the baseline rate-distortion behavior. Interestingly, except for the weakest regularization (λ c = 0.001), all fine-tuned models seem to be equivalent (w.r.t. the trade-off). We did not observe any obvious artifacts, even in the most aggressive models. 
The only image with deteriorated SSIM is the alejandro-escamilla-6 image from clic, which was consistently the most affected outlier in nearly all fine-tuned models for both NCO and MNI (1st row in Fig. C.2). In some replications it actually managed to improve, e.g., for the shown cases with λ c = 0.005 and 0.001. However, we don't see any major differences between these variations. Visualization of frequency attenuation patterns (Fig. C.3) shows analogous behavior, but the changes are more subtle on MNI. We included additional difference plots w.r.t. baseline and weakly fine-tuned models, where the changes are better visible. On NCO, due to less intrinsic high-frequency content, the behavior is still very clear (cf. bottom part of Fig. C.3). In this study, we considered two broad classes of images: native camera output (NCO) and mixed natural images (MNI), which exhibit significant differences in pixel distribution. For DCN pre-training, we relied on a large MNI dataset with images down-sampled to 256 × 256 px (Section 3.4). Fine-tuning was then performed on either NCO from a single camera (Nikon D90; Section 4) or a smaller sub-set of the original training MNI (2,500 images; Appendix C). Finally, we considered three test sets: raw with NCO from 4 different cameras; clic and kodak with MNI. We observed that the FAN models tend to have limited generalization capabilities to images with a different pixel distribution. We ran additional experiments to quantify this phenomenon, and show the obtained results in Fig. D.1, where we compare test vs. validation accuracy for various test sets (we also included a version of the clic set resized to 256×256 px). In the top row, we show results for models trained on NCO from a single camera. We can see imperfect, but reasonable generalization to the output of various cameras (red markers). Once the data distribution changes, the performance drops significantly. Analogously, FAN models trained on MNI generalize well to other MNI datasets (clic and kodak), but fail on NCO. We see an additional bias towards images down-sampled to the same resolution as the training data (compare clic vs. clic-256 images), but here the difference is small and consistent - between 5.2% and 6% based on a linear fit to the data.
| We learn an efficient lossy image codec that can be optimized to facilitate reliable photo manipulation detection at fractional cost in payload/quality and even at low bitrates. | 620 | scitldr |
Recurrent Neural Networks have long been the dominating choice for sequence modeling. However, they severely suffer from two issues: they are impotent in capturing very long-term dependencies and unable to parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolution and attention operations have been proposed recently. Notably, models with multi-head attention such as Transformer have demonstrated extreme effectiveness in capturing long-term dependencies in a variety of sequence modeling tasks. Despite their success, however, these models lack necessary components to model local structures in sequences and heavily rely on position embeddings that have limited effects and require a considerable amount of design efforts. In this paper, we propose the R-Transformer, which enjoys the advantages of both RNNs and the multi-head attention mechanism while avoiding their respective drawbacks. The proposed model can effectively capture both local structures and global long-term dependencies in sequences without any use of position embeddings. We evaluate R-Transformer through extensive experiments with data from a wide range of domains, and the empirical results show that R-Transformer outperforms the state-of-the-art methods by a large margin in most of the tasks. Recurrent Neural Networks (RNNs), especially their variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), have achieved great success in a wide range of sequence learning tasks including language modeling, speech recognition, and recommendation. Despite their success, however, the recurrent structure is often troubled by two notorious issues. First, it easily suffers from gradient vanishing and exploding problems, which largely limits the ability to learn very long-term dependencies. Second, the sequential nature of both forward and backward passes makes it extremely difficult, if not impossible, to parallelize the computation, which dramatically increases the time complexity in both training and testing procedures. Therefore, many recently developed sequence learning models have completely jettisoned the recurrent structure and only rely on convolution operations or attention mechanisms, which are easy to parallelize and allow information flow across an arbitrary length. Two representative models that have drawn great attention are Temporal Convolutional Networks (TCN) and Transformer. In a variety of sequence learning tasks, they have demonstrated comparable or even better performance than that of RNNs. The remarkable performance achieved by such models largely comes from their ability to capture long-term dependencies in sequences. In particular, the multi-head attention mechanism in Transformer allows every position to be directly connected to any other positions in a sequence. Thus, the information can flow across positions without any intermediate loss. Nevertheless, there are two issues that can harm the effectiveness of the multi-head attention mechanism for sequence learning. The first comes from the loss of sequential information of positions, as it treats every position identically. To mitigate this problem, Transformer introduces position embeddings, whose effects, however, have been shown to be limited.
Figure 1: The illustration of one layer of R-Transformer. There are three different networks that are arranged hierarchically.
In particular, the lower-level is localRNNs that process positions in a local window sequentially (this figure shows an example of a local window of size 3); the middle-level is multi-head attention networks which capture the global long-term dependencies; the upper-level is position-wise feedforward networks that conduct non-linear feature transformation. These three networks are connected by a residual and layer normalization operation. The circles with dashed lines are the paddings of the input sequence.
In addition, it requires a considerable amount of effort to design more effective position embeddings or different ways to incorporate them in the learning process. Second, while the multi-head attention mechanism is able to learn the global dependencies, we argue that it ignores the local structures that are inherently important in sequences such as natural languages. Even with the help of position embeddings, the signals at local positions can still be very weak, as the number of other positions is significantly larger. To address the aforementioned limitations of the standard Transformer, in this paper, we propose a novel sequence learning model, termed as R-Transformer. It is a multi-layer architecture built on RNNs and the standard Transformer, and enjoys the advantages of both worlds while naturally avoiding their respective drawbacks. More specifically, before computing global dependencies of positions with the multi-head attention mechanism, we firstly refine the representation of each position such that the sequential and local information within its neighborhood can be compressed in the representation. To do this, we introduce a local recurrent neural network, referred to as LocalRNN, to process signals within a local window ending at a given position. In addition, the LocalRNN operates on local windows of all the positions identically and independently and produces a latent representation for each of them. In this way, the locality in the sequence is explicitly captured. In addition, as the local window is sliding along the sequence one position by one position, the global sequential information is also incorporated. More importantly, because the LocalRNN is only applied to local windows, the aforementioned two drawbacks of RNNs can be naturally mitigated. We evaluate the effectiveness of R-Transformer with a variety of sequence learning tasks from different domains, and the empirical results demonstrate that R-Transformer achieves much stronger performance than both TCN and standard Transformer as well as other state-of-the-art sequence models. The rest of the paper is organized as follows: Section 2 discusses the sequence modeling problem we aim to solve; the proposed R-Transformer model is presented in Section 3. In Section 4, we describe the experimental details and discuss the results. The related work is briefly reviewed in Section 5. Section 6 concludes this work. Before introducing the proposed R-Transformer model, we formally describe the sequence modeling problem. Given a sequence of length N: x 1, x 2, · · ·, x N, we aim to learn a function that maps the input sequence into a label space Y, i.e., x 1, x 2, · · ·, x N → y, where y ∈ Y is the label of the input sequence. Depending on the definition of label y, many tasks can be formatted as the sequence modeling problem defined above.
For example, in language modeling task, x t is the character/word in a textual sentence and y is the character/word at next position ; in session-based recommendation, x t is the user-item interaction in a session and y is the future item that users will interact with ; when x t is a nucleotide in a DNA sequence and y is its function, this problem becomes a DNA function prediction task . Note that, in this paper, we do not consider the sequence-to-sequence learning problems. However, the proposed model can be easily extended to solve these problems and we will leave it as one future work. The proposed R-Transformer consists of a stack of identical layers. Each layer has 3 components that are organized hierarchically and the architecture of the layer structure is shown in Figure 1. As shown in the figure, the lower level is the local recurrent neural networks that are designed to model local structures in a sequence; the middle level is a multi-head attention that is able to capture global long-term dependencies; and the upper level is a position-wise feedforward networks which conducts a non-linear feature transformation. Next, we describe each level in detail. Sequential data such as natural language inherently exhibits strong local structures. Thus, it is desirable and necessary to design components to model such locality. In this subsection, we propose to take the advantage of RNNs to achieve this. Unlike previous works where RNNs are often applied to the whole sequence, we instead reorganize the original long sequence into many short sequences which only contain local information and are processed by a shared RNN independently and identically. In particular, we construct a local window of size M for each target position such that the local window includes M consecutive positions and ends at the target position. Thus, positions in each local window form a local short sequence, from which the shared RNN will learn a latent representation. In this way, the local structure information of each local region of the sequence is explicitly incorporated in the learned latent representations. We refer to the shared RNN as LocalRNN. Comparing to original RNN operation, LocalRNN only focuses on local short-term dependencies without considering any long-term dependencies. Figure 2 shows the different between original RNN and LocalRNN operations. Concretely, given the positions x t−M +1, x t−M +2, · · ·, x t of a local short sequence of length M, the LocalRNN processes them sequentially and outputs M hidden states, the last of which is used as the representation of the local short sequences: where RNN denotes any RNN cell such as Vanilla RNN cell, LSTM, GRU, etc. To enable the model to process the sequence in an auto-regressive manner and take care that no future information is available when processing one position, we pad the input sequence by (M − 1) positions before the start of a sequence. Thus, from sequence perspective, the LocalRNN takes an input sequence and outputs a sequence of hidden representations that incorporate information of local regions: The localRNN is analogous to 1-D Convolution Neural Networks where each local window is processed by convolution operations. However, the convolution operation completely ignores the sequential information of positions within the local window. 
Although the position embeddings have been proposed to mitigate this problem, a major deficiency of this approach is that the effectiveness of the position embedding could be limited; thus it requires a considerable amount of extra effort. On the other hand, the LocalRNN is able to fully capture the sequential information within each window. In addition, the one-by-one sliding operation also naturally incorporates the global sequential information. Discussion: RNNs have long been a dominating choice for sequence modeling, but they severely suffer from two problems: the first one is the limited ability to capture long-term dependencies and the second one is the time complexity, which is linear in the sequence length. However, in LocalRNN, these problems are naturally mitigated, because the LocalRNN is applied to a short sequence within a local window of fixed size, where no long-term dependency needs to be captured. In addition, the computation procedures for processing the short sequences are independent of each other. Therefore, it is very straightforward to run them in parallel (e.g., using GPUs), which can greatly improve the computation efficiency. The RNNs at the lower level introduced in the previous subsection refine the representation of each position such that it incorporates its local information. In this subsection, we build a sub-layer on top of the LocalRNN to capture the global long-term dependencies. We term it the pooling sub-layer because it functions similarly to the pooling operation in CNNs. Recent works have shown that the multi-head attention mechanism is extremely effective in learning long-term dependencies, as it allows a direct connection between every pair of positions. More specifically, in the multi-head attention mechanism, each position will attend to all the positions in the past and obtains a set of attention scores that are used to refine its representation. Mathematically, given current representations h 1, h 2, · · ·, h t, the refined new representation u t is calculated as:

u t = Concat(head 1 (h t), head 2 (h t), · · ·, head k (h t)) W o,

where head k (h t) is the result of the k-th attention pooling and W o is a linear projection matrix. Considering both efficiency and effectiveness, the scaled dot product is used as the attention function. Specifically, head i (h t) is the weighted sum of all value vectors, and the weights are calculated by applying the attention function to all the query, key pairs:

head i (h t) = Σ_{j=1}^{t} α j v j,  with  α j = exp(q · k j / √d k) / Σ_{l=1}^{t} exp(q · k l / √d k),

where q, k i, and v i are the query, key, and value vectors and d k is the dimension of k i. Moreover, q, k i, and v i are obtained by projecting the input vectors into query, key and value spaces, respectively. They are formally defined as:

q = h t W q,  k i = h i W k,  v i = h i W v,

where W q, W k and W v are the projection matrices, and each attention pooling head i has its own projection matrices. As shown in the equations above, each head i is obtained by letting h t attend to all the "past" positions; thus any long-term dependency between h t and h i can be captured. In addition, different heads will focus on dependencies in different aspects. After obtaining the refined representation of each position by the multi-head attention mechanism, we add a position-wise fully connected feed-forward network sub-layer, which is applied to each position independently and identically. This feedforward network transforms the features non-linearly and is defined as follows:

FFN(u t) = max(0, u t W 1 + b 1) W 2 + b 2.

Following the original Transformer design, we add a residual and layer-norm connection between all the sub-layers.
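Before giving the formal layer description, the two sub-layers above can be sketched in code. This is our own illustration in PyTorch (not the authors' implementation); the GRU cell, the window size, and all module names are assumptions.

```python
# Sketch of the two lower sub-layers of an R-Transformer block: a shared RNN
# applied to local windows, followed by causal multi-head attention.
# Assumptions: PyTorch, batch-first tensors of shape (batch, T, d_model).
import torch
import torch.nn as nn

class LocalRNN(nn.Module):
    def __init__(self, d_model, window):
        super().__init__()
        self.window = window
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x):                                  # x: (B, T, d)
        B, T, d = x.shape
        pad = x.new_zeros(B, self.window - 1, d)           # (M - 1) left paddings
        x_pad = torch.cat([pad, x], dim=1)                 # (B, T + M - 1, d)
        windows = x_pad.unfold(1, self.window, 1)          # (B, T, d, M)
        windows = windows.permute(0, 1, 3, 2).reshape(B * T, self.window, d)
        _, h_last = self.rnn(windows)                      # last hidden state per window
        return h_last.squeeze(0).reshape(B, T, d)          # one representation per position

def causal_attention_pooling(h, attn):
    """Multi-head attention where position t attends only to positions <= t.
    `attn` is an nn.MultiheadAttention(d_model, num_heads, batch_first=True)
    created once in the enclosing module."""
    T = h.size(1)
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=h.device), diagonal=1)
    u, _ = attn(h, h, h, attn_mask=mask)
    return u
```

Because every local window is processed independently by the shared GRU, the `B * T` windows can be batched into a single RNN call, which is what makes the LocalRNN parallelizable despite being recurrent.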
With all the aforementioned model components, we can now give a formal description of the overall architecture of an N-layer R-Transformer. For the i-th layer (i ∈ {1, 2, · · ·, N}), the three sub-layers are applied in sequence:

h 1^i, · · ·, h T^i = LocalRNN(x 1^i, · · ·, x T^i),
u 1^i, · · ·, u T^i = MultiHeadAttention(h 1^i, · · ·, h T^i),
x t^{i+1} = FFN(u t^i),  for t = 1, · · ·, T,

where T is the length of the input sequence, x t^i is the input of layer i at time step t, and each sub-layer is wrapped with the residual and layer normalization connection described above. Comparing with TCN: R-Transformer is partly motivated by the hierarchical structure of TCN; thus, we make a detailed comparison here. In TCN, the locality in sequences is captured by convolution filters. However, the sequential information within each receptive field is ignored by convolution operations. In contrast, the LocalRNN structure in R-Transformer can fully incorporate it thanks to the sequential nature of RNNs. For modeling global long-term dependencies, TCN achieves it with dilated convolutions that operate on nonconsecutive positions. Although such an operation leads to larger receptive fields in lower-level layers, it misses a considerable amount of information from a large portion of positions in each layer. On the other hand, the multi-head attention pooling in R-Transformer considers every past position and takes much more information into consideration than TCN. The proposed R-Transformer and the standard Transformer enjoy similar long-term memorization capacities thanks to the multi-head attention mechanism. Nevertheless, two important features distinguish R-Transformer from the standard Transformer. First, R-Transformer explicitly and effectively captures the locality in sequences with the novel LocalRNN structure, while the standard Transformer models it very vaguely with multi-head attention that operates on all of the positions. Second, R-Transformer does not rely on any position embeddings, as Transformer does. In fact, the benefits of simple position embeddings are very limited. In the next section, we will empirically demonstrate the advantages of R-Transformer over both TCN and the standard Transformer. Since the R-Transformer is a general sequential learning framework, we evaluate it with sequential data from various domains including images, audio, and natural languages. We mainly compare it with canonical recurrent architectures (Vanilla RNN, GRU, LSTM) and two of the most popular generic sequence models that do not have any recurrent structures, namely, TCN and Transformer. However, since the majority of existing efforts to enhance Transformer are for natural languages, in the natural language evaluation, we also include one recent advanced Transformer, i.e., Transformer-XL. For all the tasks, Transformer and R-Transformer were implemented with Pytorch, and the results for canonical recurrent architectures and TCN were directly taken from previously reported results, as we follow the same experimental settings. In addition, to make the comparison fair, we use the same set of hyperparameters (i.e., hidden size, number of layers, number of heads) for R-Transformer and Transformer. Moreover, unless specified otherwise, all models are trained with the same optimizer, and the learning rate is chosen from the same set of values according to validation performance. In addition, the learning rate is annealed such that it is reduced when validation performance reaches a plateau. This task is designed to test the model's ability to memorize long-term dependencies. It was firstly proposed in prior work and has been used by many previous studies.
Following previous settings, we rescale each 28 × 28 image in the MNIST dataset into a 784 × 1 sequence, which will be classified into ten categories (each image corresponds to one of the digits from 0 to 9) by the sequence models. Since the rescaling could make pixels that are connected in the original images far apart from each other, it requires the sequence models to learn very long-term dependencies to understand the content of each sequence. The dataset is split into training and testing sets the same as the default ones in Pytorch (version 1.0.0). The model hyperparameters and classification accuracy are reported in Table 1. From the table, it can be observed that, firstly, RNN-based methods generally perform worse than others. This is because the input sequences exhibit very long-term dependencies and it is extremely difficult for RNNs to memorize them. On the other hand, methods that build direct connections among positions, i.e., Transformer and TCN, achieve much better results. It is also interesting to see that TCN is slightly better than Transformer; we argue that this is because the standard Transformer cannot model the locality very well. However, our proposed R-Transformer, which leverages LocalRNN to incorporate local information, has achieved better performance than TCN.

[Table 2 excerpt - model size (layers/hidden units) and NLL on Nottingham: LSTM: 3.29; TCN (4/150): 3.07; Transformer (3/160): 3.34; R-Transformer (3/160): 2.37.]

Next, we evaluate R-Transformer on the task of polyphonic music modeling with the Nottingham dataset. This dataset collects British and American folk tunes and has been commonly used in previous works to investigate the model's ability for polyphonic music modeling. Following the same setting as previous work, we split the data into training, validation, and testing sets which contain 694, 173, and 170 tunes, respectively. The learning rate is chosen from {5e−4, 5e−5, 5e−6}, and dropout with probability of 0.1 is used to avoid overfitting. Moreover, gradient clipping is used during the training process. We choose negative log-likelihood (NLL) as the evaluation metric, and a lower value indicates better performance. The experimental results are shown in Table 2. Both LSTM and TCN outperform Transformer in this task. We suspect this is because these music tunes exhibit strong local structures. While Transformer is equipped with a multi-head attention mechanism that is effective in capturing long-term dependencies, it fails to capture local structures in sequences that could provide strong signals. On the other hand, R-Transformer enhanced by LocalRNN has achieved much better results than Transformer. In addition, it also outperforms TCN by a large margin. This is expected because TCN tends to ignore the sequential information in the local structure, which can play an important role, as suggested by prior work. In this subsection, we further evaluate R-Transformer's ability on both character-level and word-level language modeling tasks. The dataset we use is Penn Treebank (PTB), which contains 1 million words and has been extensively used by previous works to investigate sequence models. For the character-level language modeling task, the model is required to predict the next character given a context. Following the experimental settings of previous work, we split the dataset into training, validation, and testing sets that contain 5,059K, 396K, and 446K characters, respectively. For Transformer and R-Transformer, the learning rate is chosen from {1, 2, 3} and the dropout rate is 0.15. Gradient clipping is also used during the training process. Bits per character (bpc) is used to measure the prediction performance.
For word-level language modeling, the models are required to predict the next word given the contextual words. Similarly, we follow previous works and split PTB into training, validation, and testing sets with 888K, 70K and 79K words, respectively. The vocabulary size of PTB is 10K. As with character-level language modeling,the learning rate is chosen from {1, 2, 3} for Transformer and R-Transformer and dropout rate is 0.35. In this task, we also add Transformer-XL as one baseline, which has been particularly designed for language modeling tasks and has achieved state-of-the-art performance. Note that to make the comparison fair, we apply the same model configuration, i.e., number of layers, to Transformer-XL. All other settings such as optimizer are the same as its original ones. The learning rate is chosen from {0.01, 0.001, 0.0001} and its best validation performance is achieved with 0.001. Note that, except dropout, no other regularization tricks such as variational dropout and weight dropout are applied. The prediction performance is evaluated with perplexity, the lower value of which denotes better performance. The experimental of character-level and word-level language modeling tasks are shown in Table 3 and Table 4, respectively. Several observations can be made from the Table 3. First, Transformer performs only slightly better than RNNs while much worse than other models. The reason for this observation is similar to the case of polyphonic music modeling task that language exhibits strong local structures and standard Transformer can not fully capture them. Second, TCN achieves better than all of the RNNs, which is attributed to its ability to capture both local structures and long-term dependencies in languages. Notably, for both local structures and longterm dependencies, R-Transformer has more powerful components than TCN, i.e., LocalRNN and Multi-head attention. Therefore, it is not surprising to see that R-Transformer achieves significantly better . Table 4 presents the for word-level language modeling. Similar trends are observed, with the only exception that LSTM achieves the best among all the methods. In addition, the of Transformer-XL is only slightly better than R-transformer. Considering the fact that Transformer-XL is specifically designed for language modeling and employs the recurrent connection of segments , this suggests the limited contribution of engineered positional embeddings. In summary, experimental have shown that the standard Transformer can achieve better than RNNs when sequences exhibit very long-term dependencies, i.e., sequential MNIST while its performance can drop dramatically when strong locality exists in sequences, i.e., polyphonic music and language. Meanwhile, TCN is a very strong sequence model that can effectively learn both local structures and long-term dependencies and has very stable performance in different tasks. More importantly, the proposed R-Transformer that combines a lower level LocalRNN and a higher level multi-head attention, outperforms both TCN and Transformer by a large margin consistently in most of the tasks. The experiments are conducted on various sequential learning tasks with datasets from different domains. Moreover, all experimental settings are fair to all baselines. Thus, the observations from the experiments are reliable with the current experimental settings. However, due to the computational limitations, we are currently restricted our evaluation settings to moderate model and dataset sizes. 
Thus, more evaluations on big models and large datasets can make the more convincing. We would like to leave this as one future work. Recurrent Neural Networks including its variants such LSTM and GRU have long been the default choices for generic sequence modeling. A RNN sequentially processes each position in a sequence and maintains an internal hidden state to compresses information of positions that have been seen. While its design is appealing and it has been successfully applied in various tasks, several problems caused by its recursive structures including low computation efficiency and gradient exploding or vanishing make it ineffective when learning long sequences. Therefore, in recent years, a lot of efforts has been made to develop models 3 /700 78.93 TCN 4 without recursive structures and they can be roughly divided into two categories depending whether they rely on convolutions operations or not. The first category includes models that mainly built on convolution operations. For example, van den Oord et al. have designed an autoregressive WaveNet that is based on causal filters and dilated convolution to capture both global and local information in raw audios . Ghring et al. has successfully replace traditional RNN based encoder and decoder with convolutional ones and outperforms LSTM setup in neural machine translation tasks (; . Moreover, researchers introduced gate mechanism into convolutions structures to model sequential dependencies in languages . Most recently, a generic architecture for sequence modeling, termed as Temporal Convolutional Networks (TCN), that combines components from previous works has been proposed in . Authors in have systematically compared TCN with canonical recurrent networks in a wide range of tasks and TCN is able achieve better performance in most cases. Our R-transformer is motivated by works in this group in a sense that we firstly models local information and then focus on global ones. The most popular works in second category are those based on multi-head attention mechanism. The multi-head attention mechanism was firstly proposed in , where impressive performance in machine translation task has been achieved with Transformer. It was then frequently used in other sequence learning models (; ;). The success of multi-head attention largely comes from its ability to learn long-term dependencies through direct connections between any pair of positions. However, it heavily relies on position embeddings that have limited effects and require a fair amount of effort to design effective ones. In addition, our empirical shown that the local information could easily to be ignored by multi-head attention even with the existence of position embeddings. Unlike previously proposed Transformer-like models, R-Transformer in this work leverages the strength of RNN and is able model the local structures effectively without the need of any position embeddings. In this paper, we propose a novel generic sequence model that enjoys the advantages of both RNN and the multi-head attention while mitigating their disadvantages. Specifically, it consists of a LocalRNN that learns the local structures without suffering from any of the weaknesses of RNN and a multi-head attention pooling that effectively captures long-term dependencies without any help of position embeddings. In addition, the model can be easily implemented with full parallelization over the positions in a sequence. 
The empirical on sequence modeling tasks from a wide range of domains have demonstrated the remarkable advantages of R-Transformer over state-of-the-art nonrecurrent sequence models such as TCN and standard Transformer as well as canonical recurrent architectures. | This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention. | 621 | scitldr |
Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints. This level of fine-grained control can be difficult to obtain in large-scale neural network models. In this work, we propose a structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm. Under this formulation, we can include a range of rich, posterior constraints to enforce task-specific knowledge that is effectively trained into the neural model. This approach allows us to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models. Experiments consider applications of this approach for text generation and part-of-speech induction. For natural language generation, we find that this method improves over standard benchmarks, while also providing fine-grained control. | A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models. | 622 | scitldr |
Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Classifier-to-Generator (C2G) Attack. This situation arises when the classification model is embedded into mobile devices for offline prediction (e.g., object recognition for the automatic driving car and face recognition for mobile phone authentication). For the C2G attack, we introduce a novel GAN, PreImageGAN. In PreImageGAN, the generator is designed to estimate the sample distribution conditioned on the preimage of classification model $f$, $P(X|f(X)=y)$, where $X$ is the random variable on the sample space and $y$ is the probability vector representing the target label arbitrarily specified by the adversary. In experiments, we demonstrate that PreImageGAN works successfully with hand-written character recognition and face recognition. In character recognition, we show that, given a recognition model of hand-written digits, PreImageGAN allows the adversary to extract alphabet letter images without knowing that the model is built for alphabet letter images. In face recognition, we show that, when an adversary obtains a face recognition model for a set of individuals, PreImageGAN allows the adversary to extract face images of specific individuals contained in the set, even when the adversary has no knowledge of the faces of those individuals. Recent rapid advances in deep learning technologies are expected to promote the application of deep learning to online services with recognition of complex objects. Let us consider the face recognition task as an example. The probabilistic classification model f takes a face image x, and the model predicts the probability that the given face image is associated with an individual t, f(x) ≃ Pr[T = t|X = x]. The following three scenarios pose situations in which probabilistic classification models need to be revealed in public for online services in real applications:

Prediction with cloud environment: Suppose an enterprise provides an online prediction service with a cloud environment, in which the service takes input from a user and returns predictions to the user in an online manner. The enterprise needs to deploy the model f into the cloud to achieve this.

Prediction with private information: Suppose an enterprise develops a prediction model f (e.g., disease risk prediction) and a user wishes to have a prediction of the model with private input (e.g., personal genetic information). The most straightforward way to preserve the user's privacy entirely is to let the user download the entire model and perform prediction on the user side locally.

Offline prediction: Automatic driving cars or laptops with face authentication contain face/object recognition systems in the device. Since these devices are for mobile use and need to work standalone, the full model f needs to be embedded in the device.

In such situations where classification model f is revealed, we consider a reverse-engineering problem of models with deep architectures. Let D tr and d X,T be a set of training samples and its underlying distribution, respectively. Let f be a model trained with D tr. In this situation, is it possible for an adversary to obtain the training samples D tr (or its underlying distribution d X,T) if the classification model is given to the adversary?
If this is possible, it can cause serious problems, particularly when D tr or d X,T is private or confidential information.

Privacy violation by releasing face authentication: Let us consider the face authentication task as an example again. Suppose an adversary is given the classification model f. The adversary aims to estimate the data (face) distribution of a target individual t*, d X|T=t*. If this kind of reverse-engineering works successfully, a serious privacy violation arises because individual faces are private information. Furthermore, once d X|T=t* is revealed, the adversary can draw samples from d X|T=t*, which would cause another privacy violation (say, the adversary can draw an arbitrary number of the target's face images).

Confidential information leakage by releasing an object recognizer: Let us consider an object recognition system for automatic driving cars. Suppose a model f takes as input images from car-mounted cameras and detects various objects such as traffic signs or traffic lights. Given f, the reverse engineering reveals the sample distribution of the training samples, which might help adversaries with malicious intentions. For example, generation of adversarial examples that confuse the recognition system without being detected would become possible. Also, this kind of attack allows exposure of hidden functionalities for privileged users or unexpected vulnerabilities of the system.

If this kind of attack is possible, it indicates that careful treatment is needed before releasing model f in public, considering that publication of f might cause serious problems as listed above. We name this type of reverse engineering the classifier-to-generator (C2G) attack. In principle, estimation of labeled sample distributions from a classification/recognition model of complex objects (e.g., face images) is a difficult task for the following two reasons. First, estimation of generative models of complex objects is believed to be a challenging problem in itself. Second, model f often does not contain sufficient information to estimate the generative model of samples. In supervised classification, the label space is always much more abstract than the sample space. The classification model thus makes use of only a limited amount of information in the sample space that is sufficient to classify objects into the abstract label space. In this sense, it is difficult to estimate the sample distribution given only the classification model f. To resolve the first difficulty, we employ Generative Adversarial Networks (GANs). GANs are a neural network architecture for generative models which has developed dramatically in the field of deep learning. Also, we exploit one remarkable property of GANs, the ability to interpolate latent variables of inputs. With this interpolation, GANs can generate samples (say, images) that are not included in the training samples but are nevertheless realistic 1. Even with this powerful generation ability of GANs, it is difficult to resolve the second difficulty. To overcome this for the C2G attack, we assume that the adversary can make use of unlabeled auxiliary samples D aux as knowledge. Suppose f is a face recognition model that recognizes Alice and Bob, and the adversary tries to extract Alice's face image from f. It is natural to suppose that the adversary can use public face image samples that do not contain Alice's and Bob's face images as D aux. PreImageGAN exploits unlabeled auxiliary samples to complement knowledge extracted from the model f.
The contribution of this study is summarized as follows.

• We formulate the Classifier-to-Generator (C2G) Attack, which estimates the training sample distribution when a classification model and auxiliary samples are given (Section 3).
• We propose PreImageGAN as an algorithm for the C2G attack. The proposed method estimates the sample generation model using the interpolation ability of GANs even when the auxiliary samples used by the adversary are not drawn from the same distribution as the training sample distribution (Section 4).

1 Prior work reported that GANs could generate intermediate images between two different images and realize arithmetic operations on latent vectors. For example, by subtracting a latent vector of a man's face from a face image of a man wearing glasses, and then adding a latent vector of a woman's face, the GAN can generate the woman's face image wearing glasses.

• We demonstrate the performance of the C2G attack with PreImageGAN using EMNIST (alphanumeric image dataset) and FaceScrub (face image dataset). Experimental results show that the adversary can estimate the sample distribution even when the adversary has no samples associated with the target label at all (Section 5).

Generative Adversarial Networks (GANs) are a recently developed methodology for designing generative models proposed by BID6. Given a set of samples, GANs are algorithms with deep architectures that estimate the sample-generating distribution. One significant property of GANs is that they are expected to accurately estimate the sample distribution even when the sample space is high-dimensional and the target distribution is highly complex, such as face images or natural images. In this section, we introduce the basic concept of GANs and its variants. The learning algorithm of GANs is formulated by minimax games consisting of two players, generator and discriminator (BID6). Generator G generates a fake sample G(z) using a random number z ∼ d Z drawn from any distribution (say, uniform distribution). Discriminator D is a supervised model and is trained so that it outputs 1 if the input is a real sample x ∼ d X drawn from the sample generating distribution d X; it outputs 0 or −1 if the input is a fake sample G(z). The generator is trained so that the discriminator determines a fake sample as a real sample. By training the generator under the setting above, we can expect that samples generated from G(z) for arbitrary z are indistinguishable from real samples x ∼ d X. Letting Z be the random variable of d Z, G(Z) can be regarded as the distribution of samples generated by the generator. Training of GANs is known to reduce to optimization of G so that the divergence between G(Z) and the data generating distribution d X is minimized in a certain type of divergence (BID6). Training of the GAN proposed by BID6 (VanillaGAN) is shown to reduce to minimization of the Jensen-Shannon (JS) divergence between G(Z) and d X. Minimization of JS-divergence often suffers from gradient explosion and mode collapse (BID6). To overcome these problems, Wasserstein-GAN (WGAN), a GAN that minimizes the Wasserstein distance between G(Z) and d X, was proposed. As a method to stabilize the convergence behavior of WGAN, a method to add a regularization term called Gradient Penalty (GP) to the loss function of the discriminator was introduced (BID8). Given a set of labeled samples {(x, c), · · · } where c denotes the label, Auxiliary Classifier GAN (ACGAN) was proposed as a GAN to estimate d X|C=c, the sample distribution conditioned on label c (BID16). Differently from VanillaGAN, the generator of ACGAN takes as input a random noise z and a label c. Also, the discriminator of ACGAN is trained to predict the label of a sample in addition to the estimation of real or fake samples. In the learning process of ACGAN, the generator is additionally trained so that the discriminator correctly predicts the label of the generated sample. The generator of ACGAN can generate samples with a label specified arbitrarily. For example, when x corresponds to face images and c corresponds to age or gender, ACGAN can generate images with the specified age or gender (BID13, BID5). In our proposed algorithm introduced in the latter sections, we employ WGAN and ACGAN as building blocks.

3 PROBLEM FORMULATION

We consider a supervised learning setting. Let T be the label set, and X ⊆ R^d be the sample domain, where d denotes the sample dimension. Let ρ t be the distribution of samples in X with label t. In face recognition, x ∈ X and t ∈ T correspond to a (face) image and an individual, respectively. ρ t thus denotes the distribution of face images of individual t. We suppose the images contained in the training dataset are associated with a label subset T tr ⊂ T. Then, the training dataset is defined as D tr = {(x, t) | x ∈ X, t ∈ T tr}.

Figure 1: Outline of Classifier-to-Generator (C2G) Attack. The publisher trains classifier f from training data D tr and publishes f to the adversary. However, the publisher does not wish to leak training data D tr and sample generating distribution ρ t by publishing f. The goal of the adversary is to learn the publisher's private distribution ρ t* for any t* ∈ T tr specified by the adversary, provided model f, target label t*, and (unlabeled) auxiliary samples D aux.
As a method to stabilize convergence behavior of WGAN, a method to add a regularization term called Gradient Penalty (GP) to the loss function of the discriminator was introduced BID8 ).Given a set of labeled samples {(x, c), · · · } where c denotes the label, Auxiliary Classifier GAN (ACGAN) was proposed as a GAN to estimate d X|C=c, sample distribution conditioned by label c BID16 ). Differently from VanillaGAN, the generator of ACGAN takes as input a random noise z and a label c. Also, the discriminator of ACGAN is trained to predict a label of sample in addition to estimation of real or fake samples. In the learning process of ACGAN, generator is trained so that discriminator predicts correctly the label of generated sample in addition. The generator of ACGAN can generate samples with a label specified arbitrarily. For example, when x corresponds to face images and c corresponds to age or gender, ACGAN can generate images with specifying the age or gender BID13, BID5 ). In our proposed algorithm introduced in the latter sections, we employ WGAN and ACGAN as building blocks.3 PROBLEM FORMULATION We consider a supervised learning setting. Let T be the label set, and X ⊆ R d be the sample domain where d denotes the sample dimension. Let ρ t be the distribution of samples in X with label t. In face recognition, x ∈ X and t ∈ T correspond to a (face) image and an individual, respectively. ρ t thus denotes the distribution of face images of individual t. We suppose the images contained in the training dataset are associated with a label subset T tr ⊂ T. Then, the training dataset is defined as D tr = {(x, t)|x ∈ X, t ∈ T tr }. We denote the random Figure 1: Outline of Classifier-to-Generator (C2G) Attack. The publisher trains classifier f from training data D tr and publishes f to the adversary. However, the publisher does not wish to leak training data D tr and sample generating distribution ρ t by publishing f. The goal of the adversary is to learn the publisher's private distribution ρ t * for any t * ∈ T tr specified by the adversary provided model f, target label t * and (unlabeled) auxiliary samples D aux.variables associated with (x, t) by (X tr, T tr) Then, the distribution of X tr is given by a mixture distribution DISPLAYFORM0 where ∑ t∈Ttr α t = 1, α t > 0 for all t ∈ T tr. In the face recognition task example again, a training sample consists of a pair of an individual t and his/her face image x, (x, t) where x ∼ ρ t.Next, we define the probabilistic discrimination model we consider in our problem. Let Y be a set of |T tr |-dimension probability vector, ∆ |Ttr|. Given a training dataset D tr, a learning algorithm L gives a probabilistic discrimination model f: DISPLAYFORM1 Here the tth element of the output (f (x)) t of f corresponds to the probability with which x has label t. Letting T tr and X tr represents the random variable of T tr and d Xtr, f is the approximation of Pr[T tr |X tr]. We define the Classifier-to-Generator Attack (C2G Attack) in this section. We consider two stakeholders, publisher and adversary in this attack. The publisher holds training dataset D tr drawn from d Xtr and a learning algorithm L. She trains model f = L(D tr) and publishes f to the adversary. 
We suppose training dataset D tr and the data generating distribution ρ t for any t ∈ T tr are private or confidential information of the publisher, and the publisher does not wish to leak them by publishing f. Given f and T tr, the adversary aims to obtain ρ t* for any label t* ∈ T tr specified by the adversary. We suppose the adversary can make use of an auxiliary dataset D aux drawn from an underlying distribution d Xaux as knowledge. D aux is a set of samples associated with labels in T aux ⊂ T. We remark that D aux is defined as a set of samples associated with a specific set of labels; however, in our algorithm described in the following sections, we do not require that samples in D aux are labeled. Then, the underlying distribution d Xaux is defined as follows:

d Xaux = Σ_{t ∈ T aux} β t ρ t,

where Σ_{t ∈ T aux} β t = 1 and β t > 0 for all t ∈ T aux. The richness of the knowledge can be determined by the relation between T tr and T aux. When T tr = T aux, d Xtr = d Xaux holds. That is, the adversary can make use of samples drawn from the distribution that is exactly the same as that of the publisher. In this sense, this setting is the most advantageous to the adversary. If t* ∉ T aux, the adversary cannot make use of samples with the target label t*; this setting is more advantageous to the publisher. As the overlap between T tr and T aux increases, the situation becomes more advantageous to the adversary. The goal of the adversary is to learn the publisher's private distribution ρ t* for any t* ∈ T tr specified by the adversary, provided model f, target label t*, and auxiliary (unlabeled) samples D aux. Let A be the adversary's attack algorithm. Then, the attack by the adversary can be formulated by

ρ̂ t* = A(f, D aux, t*),

where the output of A is a distribution over X. In the face recognition example of Alice and Bob again, when the target label of the adversary is t* = Alice, the objective of the adversary is to estimate the distribution of face images of Alice by A(f, D aux, t*). The objective of the C2G attack is to estimate ρ t*, the private data generating distribution of the publisher. In principle, the measure of the success of the C2G attack is evaluated with the quasi-distance between the underlying distribution ρ t* and the estimated generative model A(f, D aux, t*). If the two distributions are close, we can confirm that the adversary successfully estimates ρ t*. However, ρ t* is unknown, and we cannot evaluate this quasi-distance directly. Instead of evaluating the distance of the two distributions directly, we evaluate the attack algorithm empirically. We first prepare a classifier f′ that is trained with D tr using a learning algorithm different from that of f. We then give samples drawn from A(f, D aux, t*) to f′ and evaluate the probability with which the label of the given samples is predicted as t*. We expect that the classifier f′ would label samples drawn from A(f, D aux, t*) as t* with high probability if A(f, D aux, t*) successfully estimates ρ t*. Considering the possibility that A(f, D aux, t*) overfits to f, we employ another classifier f′ for this evaluation. This evaluation criterion is the same as the inception accuracy introduced for ACGAN by BID16. In our setting, since our objective is to estimate the distribution concerning a specific label t*, we employ the following inception accuracy:

Pr_{x ∼ A(f, D aux, t*)} [ argmax_t (f′(x)) t = t* ].

We remark that a generative model with a high inception accuracy is not always a reasonable estimation of ρ t*.
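A sketch of how this label-specific inception accuracy could be computed empirically is given below. This is our own illustration; the reference classifier `f_prime`, the generator interface, and the sample counts are assumptions, not details from the paper.

```python
# Sketch: empirical inception accuracy for a target label t*, assuming PyTorch,
# a sampler for A(f, D_aux, t*) exposed as `sample(n)`, and a reference
# classifier `f_prime` trained on D_tr with a different learning algorithm.
import torch

@torch.no_grad()
def inception_accuracy(sample, f_prime, target_label, n_samples=10_000, batch=256):
    correct, seen = 0, 0
    while seen < n_samples:
        x = sample(min(batch, n_samples - seen))       # draw from A(f, D_aux, t*)
        probs = torch.softmax(f_prime(x), dim=1)       # reference model predictions
        correct += (probs.argmax(dim=1) == target_label).sum().item()
        seen += x.shape[0]
    return correct / seen

# Usage (hypothetical): samples are produced by the trained generator with the
# one-hot target vector y^(t*) as condition:
# acc = inception_accuracy(lambda n: G(torch.rand(n, 128) * 2 - 1, y_target), f_prime, t_star)
```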
Discrimination models with a deep architecture are often fooled with artifacts. For example, BID15 reported that images look like white noise for humans can be classified as a specific object with high probability. For this reason, we cannot conclude that a model with a high inception accuracy always generates meaningful images. To avoid this, the quality of generated images should be subjectively checked by humans. The evaluation criterion of the C2G Attack we employed for this study is similar to those for GANs. Since the objective of GANs and the C2G attack is to estimate unknown generative models, we cannot employ the pseudo distance between the underlying generating distribution and the estimated distribution. The evaluation criterion of GANs is still an open problem, and subjective evaluation is needed for evaluation of GANs BID7 ). In this study, we employ both the inception accuracy and subjective evaluation for performance evaluation of the C2G attack. The richness of the knowledge of the adversary affects the performance of the C2G attack significantly. We consider the following three levels of the knowledge. In the following, let T aux be the set of labels of samples generated by the underlying distribution of the auxiliary data, d Xaux. Also, let T tr be the set of labels of samples generated by the underlying distribution of the training data.• Exact same: T tr = T aux In this setting, we suppose T tr is exactly same as the T aux. Since D aux follows d Xtr, D aux contains samples with the target label. That is, the adversary can obtain samples labeled with the target label. The knowledge of the adversary in this setting is the most powerful among the three settings.• Partly same:t * / ∈ T aux, T aux ⊂ T tr In this setting, T aux and T tr are overlapping. However, T aux does not contain the target label. That is, the adversary cannot obtain samples labeled with the target label. In this sense, the knowledge of the adversary in this setting is not as precise as that in the former setting.• Mutually exclusive: T aux ∩ T tr = ∅ In this setting, we suppose T aux and T tr are mutually exclusive, and the adversary cannot obtain samples labeled with the target label. In this setting, the adversary cannot obtain any samples with labels used for training of model f. In this sense, the knowledge of the adversary in this setting is the poorest among the three settings. If the sample distribution of the auxiliary samples is close to that of the true underlying distribution, we can expect that estimation of d Xaux|Yaux can be used as an approximation of d Xtr|Ytr. More specifically, we can obtain the generative model of the target label d Xaux|Yaux=y (t *) by specifying the one-hot vector of the target label as the condition. As we mentioned in Section 3.4, the sample generating distribution of the auxiliary samples is not necessarily equal or close to the true sample generating distribution. In the "partly same" setting or "mutually exclusive" setting, D aux does not contain samples labeled with t * at all. It is well known that GANs can generate samples with interpolating latent variables BID2 ), Radford et al.. We expect that PreImageGAN generates samples with the target label by interpolation of latent variables of given samples without having samples of the target label. 
More specifically, if latent variables of given auxiliary samples are diverse enough and d Xaux|Yaux well approximates the true sample generating distribution, we expect that GAN can generate samples with the target label by interpolating obtained latent variables of auxiliary samples without having samples with the target label. Generator G: (Z, Y) → X of PreImageGAN generates fake samples x fake = G(z, y) using random draws of y and z. After the learning process is completed, we expect generated fake samples x fake satisfy f (x fake) = y. On the other hand, discriminator D: X → R takes as input a sample x and discriminates whether it is a generated fake sample x fake or a real sample x real ∈ D aux. FIG0: Inference on the space of y. Here, we suppose the adversary has auxiliary samples labeled with alphabets only and a probabilistic classification model that takes an image of a number and outputs the corresponding number. The axis corresponds to an element of the probabilistic vector outputted by the classification model. For example, y 9 = (f (x)) 9 denotes the probability that the model discriminates the input image as "9". The green region in the figure describes the spanned by auxiliary samples in D aux. D aux does not contain images of numbers classified so that y 9 = 1 or y 8 = 1 whereas PreImageGAN generates samples close to "9" by interpolating latent variables of images that are close to "9" such as "Q" and "g".With these requirements, the objective function of G and D is formulated as follows DISPLAYFORM0 where ∥ · ∥ L ≤ 1 denotes α-Lipschitz functions with α ≤ 1.By maximizing the first and second term concerning D, Wasserstein distance between the marginal of the generator ∫ G(Z, Y aux)dY aux and the generative distribution of auxiliary samples d Xaux is minimized. By maximizing the similarity between y and f (G (z, y) ), G is trained so that samples generated from G(z, y) satisfy f (G(z, y)) = y. γ ≥ 0 works as a parameter adjusts the effect of this term. For sample generation with PreImageGAN, G(Z, y (t *) ) is utilized as the estimation of ρ t *. We here remark that model f is regarded as a constant in the learning process of GAN and used as it is. In this section, we show that the proposed method enables to perform the C2G attack with experiments. We experimentally demonstrate that the adversary can successfully estimate ρ t * given classifier f and the set of unlabeled auxiliary samples D aux = {x|x ∈ X} even in the partly same setting and the mutually exclusive setting under some conditions. For demonstration, we consider a hand-written character classification problem (EMNIST) and a face recognition problem (FaceScrub). We used the Adam optimizer (α = 2 × 10 −4, β 1 = 0.5, β 2 = 0.999) for the training of the generator and discriminator. The batch size was set as 64. We set the number of discriminator (critic) iterations per each generator iteration n cric = 5. To enforce the 1-Lipschitz continuity of the discriminator, we add a gradient penalty (GP) term to the loss function of the discriminator BID8 ) and set the strength parameter of GP as λ = 10. We used 128-dim uniform random distribution [−1, 1] 128 as d Z. We estimated d Yaux empirically from {f (x)|x ∈ D aux } using kernel density estimation where the bandwidth is 0.01, and the Gaussian kernel was employed. EMNIST consists of grayscale 28x28 pixel images from 62 alphanumeric characters (0-9A-Za-z). 
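To make the objective above concrete, the following sketch shows a single generator update of a PreImageGAN-style training loop, reflecting the stated setup (128-dimensional uniform z and a fixed classifier f, with y_batch assumed to be drawn from the kernel density estimate of d Yaux). The use of negative squared error as the similarity between y and f(G(z, y)), and the network interfaces, are illustrative assumptions rather than the exact choices made in the experiments.

```python
import torch

def preimagegan_generator_step(G, D, f, opt_G, y_batch, gamma, z_dim=128):
    # One generator update: raise D's score on fakes (Wasserstein term) and make
    # the fixed classifier f map G(z, y) back to the conditioning vector y.
    batch = y_batch.size(0)
    z = 2 * torch.rand(batch, z_dim, device=y_batch.device) - 1   # z ~ Uniform[-1, 1]^128
    x_fake = G(z, y_batch)
    adv = -D(x_fake).mean()                        # WGAN generator loss
    # similarity between y and f(G(z, y)); negative squared error used as a stand-in
    sim = -((f(x_fake) - y_batch) ** 2).sum(dim=1).mean()
    loss = adv - gamma * sim                       # maximizing sim <=> minimizing -sim
    opt_G.zero_grad()
    loss.backward()
    opt_G.step()
    return loss.item()
```

During training, γ is then annealed upwards from 0 over generator iterations, as described in the experimental settings below.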
We evaluate the C2G attack with changing the richness of the adversary's knowledge as discussed in Section 3.4 (exact same, partly same, and mutually exclusive) to investigate how the richness of the auxiliary data affects the . Also, to investigate how the choice of the auxiliary data affects the , we tested two different types of target labels as summarized in TAB1 (lower-case target) and TAB2 (numeric target). In the former setting, an alphanumeric classification model is given to the adversary. In the latter setting, a numeric classification model is given to the adversary. In this setting, the target label t * was set as lower-case characters (t * ∈ {a, b, . . ., z}) (TAB1). In the exact/partly same setting, an alphanumeric classifier (62 labels) is given to the adversary where the classifier is trained for ten epochs and achieved test accuracy 0.8443. In the mutually exclusive setting, an alphanumeric classifier (36 labels) given to the adversary where the classifier is trained for ten epochs and achieved test accuracy 0.9202. See TAB1 for the detailed settings. In the training process of PreImageGAN, we trained the generator for 20k iterations. We set the initial value of γ to 0, incremented gamma by 0.001 per generator iteration while γ is kept less than 10. Fig. 3 represents the of the C2G attack with targeting lower-case characters against given alphanumeric classification models. Alphabets whose lower-case and upper-case letters are similar (e.g., C, K) are easy to attack. So, we selected alphabets whose lower-case letter and upper-case letter shapes are dissimilar in Fig. 3.In the exact same setting, we can confirm that the PreImageGAN works quite successfully. In the partly same setting, some generated images are disfigured compared to the exact same setting (especially when t * = q) while most of the target labels are successfully reconstructed. In the mutually exclusive setting, some samples are disfigured (especially when t * = h, i, q) while remaining targets are successfully reconstructed. As an extreme case, we tested the case when the auxiliary data consists of images drawn from uniform random, and we observed that the C2G attack could generate no meaningful images (See Fig. 7 in Appendix A). From these , we can conclude that the C2G attack against alphanumeric classifiers works successfully except several although there is an exception in the mutually exclusive setting. We also tested the C2G attack when the target label t * was set as numeric characters (t * ∈ {0, 1, . . ., 9}). In the exact/partly same setting, an alphanumeric classifier (62 labels, test accuracy 0.8443.) is given to the adversary. In the mutually exclusive setting, a numeric classifier (10 labels, test accuracy 0.9911) given to the adversary where the classifier is trained for ten epochs and achieved test accuracy 0.9202. See TAB2 for the detailed settings. PreImageGAN was trained in the same setting as the previous subsection. Fig. 4 represents the of the C2G attack with targeting numeric characters against given classification models. In the exact/partly same setting, the PreImageGAN works quite successfully as well with some exceptions; e.g., "3" and "7" are slightly disfigured in the partly same setting. On the other hand, in the mutually exclusive setting, images targeting "0" and "1" look like the target numeric characters while remaining images are disfigured or look like other alphabets. 
As shown from these , in the mutually exclusive setting, the C2G attack against alphabets works well with while it fails when targeting numeric characters. One of the reasons for this failure is in the incompleteness of the classifier given to the attacker. More precisely, when the classifier recognizes images with non-target labels as a target labels falsely, C2G attack fails. For example, in Fig 4, images of "T" are generated as images of "7" in the mutually exclusive setting. This is because the given classifier recognizes images of "T" as "7" falsely, and the PreImageGAN thus generates images like "T" as "7". See Table 8 in Appendix B; many alphabets are recognized as numeric characters falsely. As long as the given classification model recognizes Figure 3: C2G attack against an alphanumeric classifier with changing the richness of the knowledge of the adversary targeting lowercase letters. The samples in the bottom row ("y: random") are generated when y is randomly drawn from empirically estimated d Yaux.non-target characters as target characters, the C2G attack cannot generate images of the target labels correctly. In Fig 3, images of "h", "i", and "q" are disfigured in the mutually exclusive setting. This disfiguring occurs for a different reason (see Table 6 in Appendix B; no significant false recognition can be found for these characters). We consider this is because the image manifold that the classifier recognizes as the target character does not exactly fit the image manifold of the target character. Since the images generated by the C2G attack for "h," "i", and "q" are recognized as "h," "i", and "q" by the classifier with a very high probability, the images work as adversarial examples. Detailed analysis of the failure of the C2G attack would give us a hint to establish defense methods against the C2G attack. Detailed consideration of this failure remains as future work. Finally, we evaluate the quality of the C2G attack; we measured the inception accuracy using. Here we employed a ResNet-based network architecture as f ′ (see Table E in detail). As a baseline, we trained ACGAN BID16 ] with the same training samples D tr and evaluated the inception accuracy. In the exact same setting, the inception accuracy is almost equal to ACGAN. From the , we can see that the inception scores drop as the knowledge of the adversary becomes poorer. This indicates that the knowledge of the adversary affects the of the C2G attack significantly. FaceScrub dataset consists of color face images (530 persons). We resized images to 64x64 pixel images for experiments and evaluated the C2G attack in the mutually exclusive setting. In detail, we picked up 100 people as T tr (see Appendix D for the list) and used remaining 430 persons as T aux (mutually exclusive setting). If the adversary can generate face images of the 100 people in T tr by utilizing model f recognizing T tr and face images with labels in T aux, we can confirm that the C2G attack works successfully. D tr consists of 12k images, and D aux consists of 53k images. f is trained on D tr for ten epochs and achieved test accuracy 0.8395. In training PreImageGAN, we train the generator for 130k iterations. We set the initial value of γ to 0, incremented gamma by 0.0001 per generator iteration while γ is kept less than 10. FIG3 represents the of the C2G attack against the face recognition model. Those samples are randomly generated without human selection. 
The generated face images capture the features of the face images in the training samples well. From the results, we can see that the C2G attack successfully reconstructs training samples from the face recognition model without having training samples, in the mutually exclusive setting. As a byproduct, we can learn what kind of features the model uses for face recognition from the results of the C2G attack. For example, all generated face images of Keanu Reeves wear a mustache. This implies that f exploits his mustache to recognize Keanu Reeves. One may be concerned that the PreImageGAN simply picks up images in the auxiliary samples that look like the target quite well, but are labeled as some other person. To show that the PreImageGAN does not simply pick up similar images to the target, but generates images of the target by exploiting the features of face images extracted from the auxiliary images, we conducted two experiments. First, we evaluated the probability with which classifier f recognizes images in the auxiliary dataset and images generated by the PreImageGAN as the target (Keanu Reeves and Marg Helgenberger) in FIG4. The probabilities are sorted, and the top 500 are shown in the figure. The orange lines denote the probability with which the images in the auxiliary dataset are recognized as the target. A few images have a high probability (>0.80), but far fewer than in the training data (blue lines), where 80 images of the target are recognized as the target with probability >0.80. This indicates that the auxiliary samples do not contain images that are quite similar to the targets. The green lines denote the probability with which the images generated by the PreImageGAN are recognized as the target. As seen from the results, the generated images are recognized as the target with extremely high probability (>0.95). This suggests that the PreImageGAN could generate images recognized as a target with high probability from samples not recognized as the target.
Figure 4: C2G attack against a numeric classification model with changing the richness of the knowledge of the adversary, targeting numeric letters. The samples in the bottom row ("y: random") are images generated when y is randomly drawn from the empirically estimated d Yaux.
Table 4: Inception accuracy with changing knowledge of the adversary (C2G attack against the numeric classifier), for target labels 0-9. The Baseline (ACGAN) achieves 1.00 for every target digit, and the exact same setting achieves 1.00 for every digit except 0.99 for "6".
Second, to demonstrate that the PreImageGAN can generate a wide variety of images of the target, we generated images that interpolate two targets (see Appendix C for the settings and the results). As seen from the results, the PreImageGAN can generate face images by exploiting both classifier f and the features extracted from the auxiliary dataset. Finally, we conducted our experiments on an Intel(R) Xeon(R) CPU E5-2623 v3 and a single GTX TITAN X (Maxwell), and it took about 35 hours to complete the entire training process for the FaceScrub experiment. The computational capability of our environment is almost the same as a p2 instance (p2.xlarge) of Amazon Web Services. Usage of a p2.xlarge instance for 50 hours costs about $45. This means that the C2G attack is a quite practical attack that anyone can try with regular computational resources at low cost.
6 RELATED WORK
BID4 proposed the model inversion attack against machine learning algorithms that extract private input attributes from published predicted values.
Through a case study of personalized adjustment of Warfaline dosage, BID4 showed that publishing predicted dosage amount can cause leakage of private input attributes (e.g., personal genetic information) in generalized linear regression. BID3 presented a model inversion attack that reconstructs face images from a face recognition model. The significant difference between the C2G attack and the model inversion attack is the goal of the adversary. In the model inversion attack, the adversary tries to estimate a private input (or input attributes) x from predicted values y = f (x) using the predictor f. Thus, the adversary's goal is the private input x itself in the model inversion attack. By contrast, in the C2G attack, the adversary's goal is to obtain the training sample distribution. Another difference is that the target network model. The target model of BID3 was a shallow neural network model while ours is deep neural networks. As the network architecture becomes deeper, it becomes more difficult to extract information about the input because the output of the model tends to be more abstract BID10 ). BID10 discussed leakage of training samples in collaborative learning based on the model inversion attack using the IcGAN BID17 ). In their setting, the adversary's goal is not to estimate training sample distribution but to extract training samples. Also, their demonstration is limited to small-scale datasets, such as MNIST dataset (hand-written digit grayscale images, 10 labels) and AT&T dataset (400 face grayscale images with 40 labels). By contrast, our experiments are demonstrated with larger datasets, such as EMNIST dataset (62 labels) and FaceScrub dataset (530 labels, 100,000+ color images). The of the C2G attack against face recognition in the mutually exclusive setting. We trained a face recognition model of 100 people (including Brad Pitt, Keanu Reeves, Nicolas Cage and Marg Helgenberger), and evaluated the C2G attack where the classification model for the 100 people is given to the adversary while no face images of the 100 people are not given. Generated samples are randomly selected, and we did not cherry-pick "good" samples. We can recognize the generated face images as the target label. This indicates that the C2G attack works successfully for the face recognition model. BID9 discussed the membership inference attack against a generative model trained by BEGAN BID2 ) or DCGAN . In the membership inference attack, the adversary's goal is to determine whether the sample is contained in the private training dataset; the problem and the goal are apparently different from ours. Images in the auxiliary data set are not recognized as the target with high probability while images generated by the PreImageGAN are recognized as the target with very high probability. Song et al. FORMULA0 discussed malicious regularizer to memorize private training dataset when the adversary can specify the learning algorithm and obtain the classifier trained on the private training data. Their experiments showed that the adversary can estimate training data samples from the classifier when the classifier is trained with malicious regularizer. Since our setting does not assume that the adversary can specify the learning algorithm, the problem setting is apparently different from ours. BID11 and BID12 consider the understanding representation of deep neural networks through reconstruction of input images from intermediate features. 
Their studies are related to ours in the sense that the algorithm exploits intermediate features to attain the goal. To the best of our knowledge, no attack algorithm has been presented to estimate private training sample distribution as the C2G attack achieves. As described in this paper, we formulated the Classifier-to-Generator (C2G) Attack, which estimates the training sample distribution ρ t * from given classification model f and auxiliary dataset D tr. As an algorithm for C2G attack, we proposed PreImageGAN which is based on ACGAN and WGAN. Fig. 7 represents the of the C2G attack when the auxiliary data consists of noisy images which are drawn from the uniform distribution. All generated images look like noise images, not numeric letters. This reveals that the C2G attack fails when the auxiliary dataset is not sufficiently informative. More specifically, we can consider the C2G attack fails when the attacker does not have appropriate knowledge of the training data distribution.(a) t * = 0 DISPLAYFORM0 Figure 7: Images generated by the C2G attack when the target label is set as t * = 0, 1, 2 and uniformly generated noise images are used as the auxiliary dataset. We used an alphanumeric letter classifier (label num:62) described in Sec. 5.2 as f for this experiment. Images generated by the C2G attack is significantly affected by the property of the classification model f given to the C2G attack. To investigate this, we measured how often non-target characters are falsely recognized as a target character by classification model f with high probability (greater than 0.9). The tables shown below in this subsection contain at most the top-five falsely-recognized characters for each target label. If no more than five characters are falsely recognized with high probability, the fields remain blank. We consider an alphanumeric classifier f trained in the exactly/partly same setting of TAB1 represents the characters falsely recognized as the target label with high probability. Similarly, for an alphanumeric classifier f trained in the mutually exclusive setting of TAB1 represents the characters falsely recognized as the target label with high probability. Table 6: Top-five non-target characters falsely recognized as target characters (lower-case) with high probability by alphanumeric (lower-case, numeric) classifier. (X: 0.123) means that the classifier misclassified 12.3%(0.123) of images of X as the target character with high probability (>0.9). As confirmed by the of TAB5, few letters are falsely recognized by alphanumeric classifier f in the exactly/partly same setting. This support the fact that the C2G attack works quite successfully in this setting (see Figure 3). In Table 6, "E" and "F" are falsely recognized as "e" and "f", respectively frequently, while the C2G attack could successfully generate "e" and "f" in Figure 3. This is because ("e", "E") and ("f", "F") have similar shapes and this false recognition of the model does not (fortunately) disfiguring in generation of the images. In Figure 3, images of "i" and "q" in the mutually exclusive setting are somewhat disfigured while no false recognition of these characters is found in Table 6. This disfiguring is supposed to occur because the classification model f does not necessarily exploit the entire structure of "i" and "q"; the image manifold that f recognizes as "i" and "q" does not exactly fits the image manifold of "i" and "q". 
Next, we consider an alphanumeric classifier f trained in the exactly/partly same setting of TAB2 represents the characters falsely recognized as the target label with high probability. Similarly, for a numeric classifier f trained in the mutually exclusive setting of TAB2 represents the characters falsely recognized as the target label with high probability. In the exactly/partly same setting, we see in Table 7 that not many letters are falsely recognized as non-target characters. This support the fact that the C2G attack against numeric characters works successfully in the exactly/partly same setting (see Figure 4).On the other hand, in the mutually exclusive setting, Table 8 reveals that many non-target characters are falsely recognized as the target characters. In Figure 4, generated images of "6", "7" and "8" look like "h", "T" and "e", respectively. In Table 8, images of "h", "T" and "e" is falsely recognized as "6", "7" and "8", respectively. From this analysis, if the classifier falsely recognized non-target images as the target label, the C2G attack fails and PreImageGAN generates non-target images as target images. In Figure 4, images of 0,1,5 and 9 seem to be generated successfully while images for the other numeric characters are more or less disfigured. In Table 8, "O", "l", "s" and "q" are falsely recognized as "0", "1","5" and "9", respectively. This suggests that f does not necessarily contain appropriate features to recognize the target. However, the C2G attack could generate images that are quite similar to the target characters using f, and eventually, the C2G attack generated images look like "0", "1", "5" and "9" successfully. For the remaining characters, since f does not contain the necessary information to generate the images of the target, the ing images are disfigured. (a) Interpolation on both z and y. z and y is randomly chosen from dZ and dY aux, respectively.(b) Interpolation on z with fixing y. y is set to one-hot vector in which the element corresponds to "Blad Pitt" is activated.(c) Interpolation on y with fixing z. | Estimation of training data distribution from trained classifier using GAN. | 623 | scitldr |
The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis. Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near the range of a suitably-chosen generative model. In particular, in (Bora {\em et al.}, 2017) it was shown that roughly $O(k\log L)$ random Gaussian measurements suffice for accurate recovery when the $k$-input generative model is bounded and $L$-Lipschitz, and that $O(kd \log w)$ measurements suffice for $k$-input ReLU networks with depth $d$ and width $w$. In this paper, we establish corresponding algorithm-independent lower bounds on the sample complexity using tools from minimax statistical analysis. In accordance with the above upper bounds, our results are summarized as follows: (i) We construct an $L$-Lipschitz generative model capable of generating group-sparse signals, and show that the resulting necessary number of measurements is $\Omega(k \log L)$; (ii) Using similar ideas, we construct two-layer ReLU networks of high width requiring $\Omega(k \log w)$ measurements, as well as lower-width deep ReLU networks requiring $\Omega(k d)$ measurements. As a result, we establish that the scaling laws derived in (Bora {\em et al.}, 2017) are optimal or near-optimal in the absence of further assumptions.
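For context, the recovery approach to which these bounds apply can be sketched in a few lines: given noisy linear measurements of a signal near the range of a generative model G, one searches the latent space for a code whose image best explains the measurements, as in (Bora et al., 2017). The toy two-layer G, the dimensions, and the use of a generic off-the-shelf optimizer below are illustrative assumptions only; they are unrelated to the lower-bound constructions analyzed in this paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
k, n, m = 5, 100, 25                        # latent dim, signal dim, measurements
W1, W2 = rng.normal(size=(40, k)), rng.normal(size=(n, 40))

def G(z):                                   # a toy ReLU generative model R^k -> R^n
    return W2 @ np.maximum(W1 @ z, 0.0)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random Gaussian measurement matrix
x_star = G(rng.normal(size=k))              # unknown signal in the range of G
y = A @ x_star + 0.01 * rng.normal(size=m)  # noisy linear measurements

# Recovery: minimize ||A G(z) - y||^2 over the latent code z
res = minimize(lambda z: np.sum((A @ G(z) - y) ** 2), rng.normal(size=k), method="Powell")
x_hat = G(res.x)
print("relative error:", np.linalg.norm(x_hat - x_star) / np.linalg.norm(x_star))
```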
Our argument is essentially based on a reduction to compressive sensing with a group sparsity model (e.g., see ), i.e., forming a neural network that is capable of producing such signals. The proofs are presented in the full paper. We begin by stating a simple corollary of a main result of Bora et al.; as we show in the full paper, this is obtained by extending [10, Thm. 1.2] from spherical to rectangular domains, and then converting the high-probability bound to an average one. In the full paper, we also handle the case of spherical domains. The resulting statement, Corollary 1, guarantees that for a universal constant C, roughly O(k log L) random Gaussian measurements suffice for accurate (average-error) recovery in this rectangular-domain setting. In the following, we construct a Lipschitz-continuous generative model that can generate bounded k-group-sparse vectors. Then, by making use of minimax statistical analysis for group-sparse recovery, we provide an information-theoretic lower bound that matches the upper bound in Corollary 1. More precisely, we say that a signal in R^n is k-group-sparse if, when divided into k blocks of size n/k, each block contains at most one non-zero entry. Our construction of the generative function G: [−r, r]^k → R^n is given as follows (Figure 1 shows the mapping from z1 → (x1, ..., x_{n/k}); the same relation holds for z2 → (x_{n/k+1}, ..., x_{2n/k}), etc., up to zk → (x_{n−k+1}, ..., xn)):
• The output x ∈ R^n is divided into k sub-sequences of length n/k, and the i-th sub-sequence x^(i) is only a function of the corresponding input z_i, for i = 1, ..., k.
• The mapping from z_i to x^(i) is as shown in Figure 1. The interval [−r, r] is divided into n/k intervals of length 2rk/n, and the j-th entry of x^(i) can only be nonzero if z_i takes a value in the j-th interval. Within that interval, the mapping takes a "double-triangular" shape: the endpoints and midpoint are mapped to zero, and the points in between follow two triangular ramps.
It is easy to show that the generative model G is Lipschitz-continuous, with a Lipschitz constant on the order of n x_max/(kr). In addition, using similar steps to the case of k-sparse recovery, we are able to obtain a minimax lower bound for k-group-sparse recovery, which holds when x_max is not too small. Based on these results, the following sample complexity lower bound is proved in the full paper: there exists an L-Lipschitz generative model G: [−r, r]^k → R^n (and an associated output dimension n) such that, for any A ∈ R^{m×n} satisfying ‖A‖_F^2 = C_A n, any algorithm that produces some x̂ satisfying sup_{x*∈Range(G)} E[‖x̂ − x*‖_2^2] ≤ C_1 α must also have m = Ω(k log L). This requires the Lipschitz constant L to exceed a certain threshold with a sufficiently large implied constant, which is a very mild assumption since for fixed r and α, the right-hand side of that condition tends to zero as k grows large (whereas typical Lipschitz constants are at least equal to one, if not much higher).
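The construction above is simple enough to write out explicitly. The sketch below implements one block z_i → x^(i) under our reading of Figure 1: each of the n/k sub-intervals of [−r, r] owns one output coordinate, that coordinate follows a double-triangular bump (zero at the sub-interval's endpoints and midpoint) as z_i crosses the sub-interval, and all other coordinates remain zero, so every output is k-group-sparse. The peak height x_max and the sign convention of the two triangles are assumptions made for illustration.

```python
import numpy as np

def block_mapping(z_i, n_over_k, r=1.0, x_max=1.0):
    # Map a scalar z_i in [-r, r] to an (n/k)-dimensional sub-sequence x^(i)
    # with at most one non-zero entry (a group-sparse block).
    x = np.zeros(n_over_k)
    width = 2.0 * r / n_over_k                        # sub-interval length 2rk/n
    j = min(int((z_i + r) // width), n_over_k - 1)    # which sub-interval z_i falls in
    t = (z_i + r - j * width) / width                 # position within it, in [0, 1]
    # double-triangular shape: endpoints and midpoint map to zero,
    # quarter points reach +x_max and -x_max respectively (sign choice assumed)
    if t < 0.5:
        x[j] = x_max * (1.0 - abs(4.0 * t - 1.0))
    else:
        x[j] = -x_max * (1.0 - abs(4.0 * t - 3.0))
    return x

def G(z, n, r=1.0, x_max=1.0):
    # Full generative model: concatenate k independent block mappings.
    k = len(z)
    return np.concatenate([block_mapping(zi, n // k, r, x_max) for zi in z])
```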
While this precise approach does not appear to be suited to properly understanding the dependence on width and depth in Corollary 2, we now show that a simple variant indeed suffices: We form a wide and/or deep ReLU network G: R k → R n capable of producing all (kk 0)-group-sparse signals having non-zero entries ±ξ, where k 0 is a certain positive integer that may be much larger than one. The idea of the construction is illustrated in Figure 2, which shows the mappings for k = 1 (the general case simply repeats this structure in parallel to get an output dimension n = n 0 k). Note also that we need to replace the rectangular shapes by trapeziums (with high-gradient diagonals) to make them implementable with a ReLU network. Again using the minimax lower bound for group-sparse recovery and a suitable choice of ξ, the following is proved in. Theorem 2. Fix C 1, C A > 0, and consider the problem of compressive sensing with generative models under i.i.d. N 0, α m noise, a measurement matrix A ∈ R m×n satisfying A 2 F = C A n, and the above-described generative model G: R k → R n with parameters k, k 0, n 0, and ξ. Then, if n 0 ≥ C 0 k 0 for an absolute constant C 0, then there exists a constant C 2 = Θ such that the choice ξ = C2α k yields the following: • Any algorithm producing somex satisfying sup x * ∈Range(G) E x − x * 2 2 ≤ C 1 α must also have m = Ω kk 0 log n kk0 (or equivalently m = Ω kk 0 log n0 k0, since n = n 0 k). • The generative function G can be implemented as a ReLU network with a single hidden layer (i.e., d = 2) of width at most w = O(k( n0 k0) k0 ). • Alternatively, if n0 k0 is an integer power of two, the generative function G can be implemented as a ReLU network with depth d = O k 0 log n0 k0 and width w = O(n). In the settings described in the second and third dot points, the sample complexity from Corollary 2 behaves as kd log w = O kk 0 log n0 k0 + k log k and kd log w = O kk 0 · log n0 k0 · log n respectively. While we do not claim a lower bound for every possible combination of depth and width, the final statement of Theorem 2 reveals that the upper and lower bounds match up to a constant factor (high-width case with log k ≤ O k 0 log n0 k0) or up to a log n factor (high-depth case). The proof of the claim for the high-width case is based on the fact that in Figure 2, upon replacing the rectangles by trapeziums, each mapping is piecewise linear, and at the -th scale the number of pieces is O n0 k0 −1, which we sum over = 1,..., k 0 to get the overall width. In the high-depth case, we exploit the periodic nature of the signals in Figure 2, and use the fact that depth-d neural networks can be used to produce periodic signals with O(2 d) repetitions. In our case, the maximum number of repetitions is O n0 k0 k0 (at the finest scale in Figure 2). We have established, to our knowledge, the first lower bounds on the sample complexity for compressive sensing with generative models. To achieve these, we constructed generative models capable of producing group-sparse signals, and then applied a minimax lower bound for group-sparse recovery. For bounded Lipschitz-continuous generative models we matched the O(k log L) scaling law derived in, and for ReLU-based generative models, we showed that the dependence of the O(kd log w) bound from has an optimal or near-optimal dependence on both the width and depth. A possible direction for future research is to understand what additional assumptions could be placed on the generative model to further reduce the sample complexity. 
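To illustrate why both constructions discussed above are ReLU-implementable, the sketch below builds (i) a trapezoidal bump from four ReLU units in a single hidden layer (the shape used in place of the rectangles of Figure 2) and (ii) the standard depth-based trick in which composing a tent map with itself d times produces O(2^d) repetitions of a pattern. Both are generic illustrations of the width and depth counts quoted above, not the exact networks used in the proof of Theorem 2.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def trapezoid(z, a, b, slope=100.0):
    # One-hidden-layer bump from four ReLUs: 0 outside [a, b], 1 on most of the
    # interior, with steep linear ramps of width 1/slope at the two edges.
    w = 1.0 / slope
    return slope * (relu(z - a) - relu(z - a - w) - relu(z - b + w) + relu(z - b))

def tent(z):
    # Tent map on [0, 1]; ReLU-expressible as 2*z - 4*relu(z - 0.5).
    return 2.0 * z - 4.0 * relu(z - 0.5)

def oscillations(z, depth):
    # Composing the tent map depth times gives a piecewise-linear function with
    # O(2**depth) linear pieces, i.e. exponentially many repetitions in the depth.
    for _ in range(depth):
        z = tent(z)
    return z

z = np.linspace(0.0, 1.0, 9)
print(trapezoid(z, 0.25, 0.75))   # approximately an indicator of [0.25, 0.75]
print(oscillations(z, 3))         # 2^3 = 8 oscillations across [0, 1]
```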
| We establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions. | 624 | scitldr |
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also found the agent could make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments. Many machine learning algorithms are rooted in discovering patterns of correlation in data. While this has been sufficient to excel in several areas BID20 BID7, sometimes the problems we are interested in are fundamentally causal. Answering questions such as "Does smoking cause cancer?" or "Was this person denied a job due to racial discrimination?" or "Did this marketing campaign cause sales to go up?" all require an ability to reason about causes and effects and cannot be achieved by purely associative inference. Even for problems that are not obviously causal, like image classification, it has been suggested that some failure modes emerge from lack of causal understanding. Causal reasoning may be an essential component of natural intelligence and is present in human babies, rats and even birds BID23 BID14 BID15 BID5 BID21. There is a rich literature on formal approaches for defining and performing causal reasoning BID29 BID33 BID8 BID30 ).Here we investigate whether procedures for learning and using causal structure can be produced by meta-learning. The approach of meta-learning is to learn the learning (or inference) procedure itself, directly from data. We adopt the specific method of BID9 and BID35, training a recurrent neural network (RNN) through model-free reinforcement learning. We train on a large family of tasks, each underpinned by a different causal structure. The use of meta-learning avoids the need to manually implement explicit causal reasoning methods in an algorithm, offers advantages of scalability by amortizing computations, and allows automatic incorporation of complex prior knowledge BID1 BID35 BID11. Additionally, by learning end-to-end, the algorithm has the potential to find the internal representations of causal structure best suited for the types of causal inference required. This work probed how an agent could learn to perform causal reasoning in three distinct settingsobservational, interventional, and counterfactual -corresponding to different types of data available to the agent during the first phase of an episode. In the observational setting (Experiment 1), the agent could only obtain passive observations from the environment. 
This type of data allows an agent to infer associations (associative reasoning) and, when the structure of the underlying causal model permits it, to estimate the effect that changing a variable in the environment has on another variable, namely to estimate causal effects (cause-effect reasoning). In the interventional setting (Experiment 2), the agent could directly set the values of some variables in the environment. This type of data in principle allows an agent to estimate causal effects for any underlying causal model. In the counterfactual setting (Experiment 3), the agent first had an opportunity to learn about the causal graph through interventions. At the last step of the episode, it was asked a counterfactual question of the form "What would have happened if a different intervention had been made in the previous time-step?". Next we will formalize these three settings and the patterns of reasoning possible in each, using the graphical model framework BID29 BID33 BID8, and introduce the meta-learning methods that we will use to train agents that are capable of such reasoning. Causal relationships among random variables can be expressed using causal directed acyclic graphs (DAGs) (see Appendix). A causal DAG is a graphical model that captures both independence and causal relations. Each node X i corresponds to a random variable, and the joint distribution p(X 1, ..., X N) is given by the product of conditional distributions of each node X i given its parent nodes pa(X i), i.e. p(X 1, ..., X N) = ∏ i p(X i |pa(X i)). Edges carry causal semantics: if there exists a directed path from X i to X j, then X i is a potential cause of X j. Directed paths are also called causal paths. The causal effect of X i on X j is the conditional distribution of X j given X i restricted to only causal paths. An example causal DAG G is given in the figure on the left, where E represents hours of exercise in a week, H cardiac health, and A age. The causal effect of E on H is the conditional distribution restricted to the path E → H, i.e. excluding the path E ← A → H. The variable A is called a confounder, as it confounds the causal effect with non-causal statistical influence. Simply observing cardiac health conditioned on exercise level, i.e. p(H|E) (associative reasoning), cannot tell us whether changes in exercise level cause changes in cardiac health (cause-effect reasoning), since there is always the possibility that the correlation between the two is due to the common confounder of age. Cause-effect Reasoning. The causal effect can be seen as the conditional distribution p →E=e (H|E = e) on the graph G →E=e above (right), resulting from intervening on E by replacing p(E|A) with a delta distribution δ E=e (thereby removing the link from A to E) and leaving the remaining conditional distributions p(H|E,A) and p(A) unaltered. The rules of do-calculus BID29 BID30 tell us how to compute p →E=e (H|E = e) using observations from G. In this case p →E=e (H|E = e) = ∑ A p(H|E = e, A) p(A). Counterfactual Reasoning. Cause-effect reasoning can be used to correctly answer predictive questions of the type "Does exercising improve cardiac health?" by accounting for causal structure and confounding. However, it cannot answer retrospective questions about what would have happened. For example, given an individual i who has died of a heart attack, this method would not be able to answer questions of the type "What would the cardiac health of this individual have been had they done more exercise?".
This type of question requires estimating unobserved sources of noise and then reasoning about the effects of this noise under a graph conditioned on a different intervention. Meta-learning refers to a broad range of approaches in which aspects of the learning algorithm itself are learned from the data. Many individual components of deep learning algorithms have been successfully meta-learned, including the optimizer BID1, initial parameter settings BID11, a metric space BID34, and use of external memory BID31.Following the approach of BID9 BID35, we parameterize the entire learning algorithm as a recurrent neural network (RNN), and we train the weights of the RNN with model-free reinforcement learning (RL). The RNN is trained on a broad distribution of problems which each require learning. When trained in this way, the RNN is able to implement a learning algorithm capable of efficiently solving novel learning problems in or near the training distribution. Learning the weights of the RNN by model-free RL can be thought of as the "outer loop" of learning. The outer loop shapes the weights of the RNN into an "inner loop" learning algorithm. This inner loop algorithm plays out in the activation dynamics of the RNN and can continue learning even when the weights of the network are frozen. The inner loop algorithm can also have very different properties from the outer loop algorithm used to train it. For example, in previous work this approach was used to negotiate the exploration-exploitation tradeoff in multi-armed bandits BID9 and learn algorithms which dynamically adjust their own learning rates BID35. In the present work we explore the possibility of obtaining a causally-aware inner-loop learning algorithm. See the Appendix for a more formal approach to meta-learning. In the experiments, in each episode the agent interacted with a different causal DAG G. G was drawn randomly from the space of possible DAGs under the constraints given in the next paragraph. Each episode consisted of T steps, and was divided into two phases: information and quiz. The information phase, corresponding to the first T −1 steps, allowed the agent to collect information by interacting with or passively observing samples from G. The agent could potentially use this information to infer the connectivity and weights of G. The quiz phase, corresponding to the final step T, required the agent to exploit the causal knowledge it collected in the information phase, to select the node with the highest value under a random external intervention. Causal graphs, observations, and actions. We generated all graphs on N =5 nodes, with edges only in the upper triangular of the adjacency matrix (this guarantees that all the graphs obtained are DAGs), with edge weights, w ji ∈{−1,0,1} (uniformly sampled), and removed 300 for held-out testing. The remaining 58749 (or 3 N(N−1)/2 − 300) were used as the training set. Each node's value, X i ∈ R, was Gaussiandistributed. The values of parentless nodes were drawn from N (µ = 0.0,σ = 0.1). The conditional probability of a node with parents was p(X i |pa(X i)) = N (µ = j w ji X j,σ = 0.1), where pa(X i) represents the parents of node X i in G. The values of the 4 observable nodes (the root node, was always hidden), were concatenated to create v t and provided to the agent in its observation vector, o t =[v t,m t], where m t is a one-hot vector indicating external intervention during the quiz phase (explained below).Information phase. 
In the information phase, an information action, a t, caused an intervention on the a t -th node, setting its value to X at = 5. We choose an intervention value outside the likely range of sampled observations, to facilitate learning of the causal graph. The observation from the intervened graph, G →Xa t =5, was sampled similarly to G, except the incoming edges to X at were severed, and its intervened value was used for conditioning its children's values. The node values in G →Xa t =5 were distributed as p →Xi=5 (X 1:N\i |X i = 5). If a quiz action was chosen during the information phase, it was ignored, the G values were sampled as if no intervention had been made, and the agent was given a penalty of r t =−5 in order to encourage it to take quiz actions at only during quiz phase. After the action was selected, an observation was provided to the agent. The default length of this phase was fixed to T = N = 5 since in the noise-free limit, a minimum of T −1=4 interventions are required in general to resolve the causal structure, and score perfectly on the test phase. Quiz phase. In the quiz phase, one non-hidden node was selected at random to be intervened on externally, X j, and its value was set to −5. We chose an intervention value of −5 never previously observed by the agent in that episode, thus disallowing the agent from memorizing the of interventions in the information phase to perform well on the quiz phase. The agent was informed of this by the observed m T −1 (a one-hot vector which indicated which node would be intervened on), from the final pre-quiz phase time-step, T −1. Note, m t was set to a zero-vector for steps t < T −1. A quiz action, a T, chosen by the agent indicated the node whose value would be given to the agent as a reward. In other words, the agent would receive reward, r T =X a T −(N−1). Again, if a quiz action was chosen during the information phase, the node values were not sampled and the agent was simply given a penalty of r T =−5.Active vs passive agents. Our agents had to perform two distinct tasks during the information phase: a) actively choose which nodes to set values on, and b) infer the causal DAG from its observations. We refer to this setup as the "active" condition. To control for (a), we created the "passive" condition, where the agent's information phase actions are not learned. To provide a benchmark for how well the active agent can perform task (a), we fixed the passive agent's intervention policy to be an exhaustive sweep through all observable nodes. This is close to optimal for this domain -in fact it is the optimal policy for noise-free conditional node values. We also compared the active agent's performance to a baseline agent whose policy is to intervene randomly on the observable nodes in the information phase, in the Appendix. Two kinds of learning The "inner loop" of learning (see Section 2.2) occurs within each episode where the agent is learning from the evidence it gathers during the information phase in order to perform well in the quiz phase. The same agent then enters a new episode, where it has to repeat the task on a different DAG. Test performance is reported on DAGs that the agent has never previously seen, after all the weights of the RNN have been fixed. Hence, the only transfer from training to test (or the "outer loop" of learning) is the ability to discover causal dependencies based on observations in the information phase, and to perform causal inference in the quiz phase. 
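The episode structure just described can be summarized in a short simulation sketch: a random upper-triangular weighted DAG is drawn, each information-phase action sets a chosen node to +5 with its incoming edges severed, and the quiz phase applies an external intervention of −5 and rewards the value of the node the agent selects. The function names and the stand-in random policy are our own; the graph and noise parameters follow the task specification above.

```python
import numpy as np

N, SIGMA, T = 5, 0.1, 5

def sample_dag(rng):
    # Random weighted DAG on N nodes: edge weights in {-1, 0, 1}, restricted to
    # the upper triangle of the adjacency matrix so the graph is acyclic.
    return np.triu(rng.choice([-1.0, 0.0, 1.0], size=(N, N)), k=1)   # W[j, i]: X_j -> X_i

def sample_values(W, rng, do_node=None, do_value=0.0):
    # Ancestral sampling of X_i ~ N(sum_j W[j, i] X_j, SIGMA); if do_node is set,
    # that node ignores its parents (incoming edges severed) and takes do_value.
    x = np.zeros(N)
    for i in range(N):                                 # index order = topological order
        x[i] = do_value if i == do_node else rng.normal(W[:, i] @ x, SIGMA)
    return x

def run_episode(W, policy, rng):
    # Information phase: T-1 interventions chosen by the agent, value fixed to +5.
    for t in range(T - 1):
        a_t = policy(phase="info", step=t)
        v_t = sample_values(W, rng, do_node=a_t, do_value=5.0)[1:]   # node 0 is hidden
        # (the agent would receive o_t = [v_t, m_t] here)
    # Quiz phase: one observable node is externally intervened on and set to -5.
    j = int(rng.integers(1, N))
    x = sample_values(W, rng, do_node=j, do_value=-5.0)
    a_T = policy(phase="quiz", step=T - 1, external=j)
    return x[a_T]                                      # reward: value of the chosen node

rng = np.random.default_rng(0)
W = sample_dag(rng)
random_policy = lambda **kwargs: int(rng.integers(1, N))
print("episode reward:", run_episode(W, random_policy, rng))
```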
We used a long short-term memory (LSTM) network BID18 ) (with 96 hidden units) that, at each time-step t, receives a concatenated vector containing [o t,a t−1,r t−1] as input, where o t is the observation 5, a t−1 is the previous action (as a one-hot vector) and r t−1 the reward (as a single real-value) 6. The outputs, calculated as linear projections of the LSTM's hidden state, are a set of policy logits (with dimensionality equal to the number of available actions), plus a scalar baseline. The policy logits are transformed by a softmax function, and then sampled to give a selected action. Learning was by asynchronous advantage actor-critic BID24. In this framework, the loss function consists of three terms -the policy gradient, the baseline cost and an entropy cost. The baseline cost was weighted by 0.05 relative to the policy gradient cost. The weighting of the entropy cost was annealed over the course of training from 0.05 to 0. Optimization was done by RMSProp with =10 −5, momentum = 0.9 and decay = 0.95. Learning rate was annealed from 3×10 −6 to 0. For all experiments, after training, the agent was tested with the learning rate set to zero, on a held-out test set. Our three experiments (observational, interventional, and counterfactual) differed in the properties of the v t that was observed by the agent during the information phase, and thereby limited the extent of causal reasoning possible within each data setting. Our measure of performance is the reward earned in the quiz phase for held-out DAGs. Choosing a random node node in the quiz phase in a reward of −5/4=−1.25, since one node (the externally intervened node) always has value −5 and the others have on average 0 value. By learning to simply avoid the externally intervened node, the agent can earn on average 0 reward. Consistently picking the node with the highest value in the quiz phase requires the agent to perform causal reasoning. For each agent, we take the average reward earned across 1200 episodes (300 held-out test DAGs, with 4 possible external interventions). We train 12 copies of each agent and report the average reward earned by these, with error bars showing 95% confidence intervals. In Experiment 1, the agent could neither intervene to set the value of variables in the environment, nor observe any external interventions. In other words, it only received observations from G, not G →Xj (where X j is a node that has been intervened on). This limits the extent of causal inference possible. In this experiment, we tested six agents, four of which were learned: "Observational", "Long Observational", "Active Conditional", "Passive Conditional", "Observational MAP Baseline"(not learned) and the "Optimal Associative Baseline" (not learned). We also ran two other standard RL baselines-see the Appendix for details. Observational Agents: In the information phase, the actions of the agent were ignored 7, and the observational agent always received the values of the observable nodes as sampled from the joint distribution associated with G. In addition to the default T =5 episode length, we also trained this agent with 4× longer episode length (Long Observational Agent), to measure performance increase with more observational data. Conditional Agents: The information phase actions corresponded to observing a world in which the selected node X j is equal to X j =5, and the remaining nodes are sampled from the conditional distribution p(X 1:N\j |X j =5), where X 1:N\j indicates the set of all nodes except X j. 
This differs from intervening on the variable X j by setting it to the value X j =5, since here we take a conditional sample from G rather than from G →Xj=5 (i.e. from p →Xj=5 (X 1:N\j |X j = 5)), and inference about the corresponding node's parents is possible. Therefore, this agent still has access to only observational data, as with the observational agents. However, on average it receives more diagnostic information about the relation between the random variables in G, since it can observe samples where a node takes a value far outside the likely range of sampled observations. We run active and passive versions of this agent as described in Section 3 Optimal Associative Baseline: This baseline receives the true joint distribution p(X 1:N) implied by the DAG in that episode, therefore it has full knowledge of the correlation structure of the environment 8. It can therefore do exact associative reasoning of the form p(X j |X i = x), but cannot do any cause-effect reasoning of the form p →Xi=x (X j |X i = x). In the quiz phase, this baseline chooses the node that has the maximum value according to the true p(X j |X i =x) in that episode, where X i is the node externally intervened upon, and x=−5.Observational MAP Baseline: This baseline follows the traditional method of separating causal induction and causal inference. We first carry out exact maximum a posteriori (MAP) inference over the space of DAGs in each episode (i.e. causal induction) by selecting the DAG (G MAP) of the 59049 unique possibilities that maximizes the likelihood of the data observed, v 1:T, by the Observational Agent in that episode. This is equivalent to maximizing the posterior probability since the prior over graphs is uniform. We focus on three key questions in this experiment: (i) Can our agents learn to do associative reasoning with observational data?, (ii) Can they learn to do cause-effect reasoning from observational data?, and (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes? For (i), we see that the Observational Agents achieve reward above the random baseline (see the Appendix), and that more observations (Long Observational Agent) lead to better performance FIG0, indicating that the agent is indeed learning the statistical dependencies between the nodes. We see that the performance of the Passive-Conditional Agent is better than either of the Observational Agents, since the data it observes is very informative about the statistical dependencies in the environment. Finally, we see that the PassiveConditional Agent's performance is comparable (in fact surpasses as discussed below) the performance of the Optimal Associative Baseline, indicating that it is able to do perfect associative inference. For (ii), we see the crucial that the Passive-Conditional Agent's performance is significantly above the Optimal Associative Baseline, i.e. it performs better than what is possible using only correlations. We compare their performances, split by whether or the node that was intervened on in the quiz phase of the episode has a parent FIG0. If the intervened node X j has no parents, then G =G →Xj, and there is no advantage to being able to do cause-effect reasoning. 
We see indeed that the Passive-Conditional agent performs better than the Optimal Associative Baseline only when the intervened node has parents (denoted by hatched bars in FIG0), indicating that this agent is able to carry out some cause-effect reasoning, despite access to only observational data -i.e. it learns some form of do-calculus. We show the quiz phase for an example test DAG in FIG0, seeing that the Optimal Associative Baseline chooses according to the node values predicted by G whereas the Passive-Conditional Agent chooses according the node values predicted by G →Xj.For (iii), we see FIG0 that the Active-Conditional Agent's performance is only marginally below the performance of the Passive-Conditional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance. In Experiment 2, the agent receives interventional data in the information phase -it can choose to intervene on any observable node, X j, and observe a sample from the ing graph G →Xj. As discussed in Section 2.1, access to intervention data permits cause-effect reasoning even in the presence of unobserved confounders, a feat which is in general impossible with access only to observational data. In this experiment, we test four new agents, two of which were learned: "Active Interventional", "Passive Interventional", "Interventional MAP Baseline"(not learned), and "Optimal Cause-Effect Baseline" (not learned).Interventional Agents: The information phase actions correspond to performing an intervention on the selected node X j and sampling from G →Xj (see Section 3 for details). We run active and passive versions of this agent as described in Section 3. We see that the Passive-Int. Agent's choice is consistent with maximizing on these (correct) node value.each node according to G MAP →Xj where X j is the node externally intervened upon (i.e. causal inference), and choose the node with the highest value. Optimal Cause-Effect Baseline: This baseline receives the true DAG, G. In the quiz phase, it chooses the node that has the maximum value according to G →Xj, where X j is the node externally intervened upon. We focus on three key questions in this experiment: (i) Can our agents learn to do cause-effect reasoning from interventional data?, (ii) How does the cause-effect reasoning in our agents which have access to interventional data differ from the cause-effect reasoning measured in Experiment 1 (in agents that have access only to observational data)? (iii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes?For (i) we see in FIG2 that the Passive-Interventional Agent's performance is comparable to the Optimal Cause-Effect Baseline, indicating that it is able to do close to perfect cause-effect reasoning in this domain. For (ii) we see in FIG2 the crucial that the Passive-Interventional Agent's performance is significantly better than the Passive-Conditional Agent. We compare the performances of these two agents, split by whether the node that was intervened on in the quiz phase of the episode had unobserved confounders with other variables in the graph FIG2. In confounded cases, as described in Section 2.1, cause-effect reasoning is impossible with only observational data. 
We see that the performance of the Passive-Interventional Agent does not vary significantly with confoundedness, whereas the performance of the Passive-Conditional Agent is significantly lower in the confounded cases. This indicates that the improvement in the performance of the agent that has access to interventional data (as compared to the agents that had access to only observational data) is largely driven by its ability to also do cause-effect reasoning in the presence of confounders. This is highlighted by FIG2, which shows the quiz phase for an example DAG, where the Passive-Conditional agent is unable to resolve the confounder, but the Passive-Interventional agent can. For (iii), we see in FIG3 that the Active-Interventional Agent's performance is only marginally below the performance of the near optimal Passive-Interventional Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance. In Experiment 3, the agent was again allowed to make interventions as in Experiment 2, but in this case the quiz phase task entailed answering a counterfactual question. We explain here what a counterfactual question in this domain looks like. Consider the conditional distribution p(X i |pa(X i))=N (j w ji X j,0.1) as described in Section 3 as X i = j w ji X j + where is distributed as N (0.0,0.1), and represents the specific randomness introduced when taking one sample from the DAG. After observing the nodes FIG0 for a legend. Here, the left panel shows G →Xj=−5 and the nodes taking the mean values prescribed by p →Xj=−5 (X 1:N\j |X j =−5). We see that the Passive-Int. Agent's choice is consistent with maximizing on these node values, where it makes a random choice between two nodes with the same value. The right panel panel shows G →Xj=−5 and the nodes taking the exact values prescribed by the means of p →Xj=−5 (X 1:N\j |X j =−5), combined with the specific randomness inferred from the previous time step. As a of accounting for the randomness, the two previously degenerate maximum values are now distinct. We see that the Passive-CF. agent's choice is consistent with maximizing on these node values.in the DAG in one sample, we can infer this specific randomness i for each node X i (i.e. abduction as described in the Appendix) and answer counterfactual questions like "What would the values of the nodes be, had X j in that particular sample taken on a different value than what we observed?", for any of the nodes X j. We test 2 new learned agents: "Active Counterfactual" and "Passive Counterfactual".Counterfactual Agents: This agent is exactly analogous to the Interventional agent, with the addition that the exogenous noise in the last information phase step t=T −1 (where say X p =+5), is stored and the same noise is used in the quiz phase step t=T (where say X f =−5). While the question our agents have had to answer correctly so far in order to maximize their reward in the quiz phase was "Which of the nodes X 1:N\j will have the highest value when X f is set to −5? ", in this setting, we ask "Which of the nodes X 1:N\j would have had the highest value in the last step of the information phase, if instead of having X p =+5, we had X f =−5? ". 
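To make the counterfactual quiz-phase question above concrete, the following is a minimal sketch (not the paper's implementation) of how the exogenous noise of one sample from a linear-Gaussian DAG can be inferred and then reused under a different intervention. The weight matrix W, the node ordering, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-node chain DAG 0 -> 1 -> 2 -> 3 -> 4: W[j, i] is the weight of edge X_j -> X_i.
W = np.array([[0.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0, 0.0]])
NOISE_STD = np.sqrt(0.1)          # assuming 0.1 is the noise variance
order = range(5)                  # nodes assumed already in topological order

def sample(do_node=None, do_value=None, eps=None):
    """Ancestral sampling from G, or from G_->Xj when do_node is set.
    If eps is given, that exogenous noise is reused (counterfactual prediction)."""
    x = np.zeros(5)
    if eps is None:
        eps = rng.normal(0.0, NOISE_STD, size=5)
    for i in order:
        x[i] = W[:, i] @ x + eps[i]       # parents precede i in topological order
        if do_node == i:
            x[i] = do_value               # intervention overrides the mechanism
    return x, eps

# Last information-phase step: intervene X_p = +5 and observe all node values.
x_obs, _ = sample(do_node=1, do_value=+5.0)
# Abduction: recover the noise terms consistent with the observed sample
# (the entry for the intervened node is meaningless but is never used, since we re-intervene).
eps_hat = x_obs - W.T @ x_obs
# Counterfactual quiz phase: what would the nodes have been with X_f = -5 instead,
# keeping the same exogenous noise?
x_cf, _ = sample(do_node=1, do_value=-5.0, eps=eps_hat)
print(x_cf)
```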
We run active and passive versions of this agent as described in Section 3. Optimal Counterfactual Baseline: This baseline receives the true DAG and does exact abduction based on the exogenous noise observed in the penultimate step of the information phase, and combines this correctly with the appropriate interventional inference on the true DAG in the quiz phase. We focus on two key questions in this experiment: (i) Can our agents learn to do counterfactual reasoning?, (ii) In addition to making causal inferences, can our agent also choose good actions in the information phase to generate the data it observes? For (i), we see that the Passive-Counterfactual Agent achieves higher reward than the Passive-Interventional Agent and the Optimal Cause-Effect Baseline. To evaluate whether this difference results from the agent's use of abduction (see the Appendix for details), we split the test set into two groups, depending on whether or not the decision for which node will have the highest value in the quiz phase is affected by exogenous noise, i.e. whether or not the node with the maximum value in the quiz phase changes if the noise is resampled. This is most prevalent in cases where the maximum expected reward is degenerate, i.e. where several nodes give the same maximum reward (denoted by hatched bars in FIG4). Here, agents with no access to the noise have no basis for choosing one over the other, but different noise samples can give rise to significant differences in the actual values that these degenerate nodes have. We see indeed that there is no difference in the rewards received by the Passive-Counterfactual and Passive-Interventional Agents in the cases where the maximum values are distinct; however, the Passive-Counterfactual Agent significantly outperforms the Passive-Interventional Agent in cases where there are degenerate maximum values. For (ii), we see in FIG5 that the Active-Counterfactual Agent's performance is only marginally below the performance of the Passive-Counterfactual Agent, indicating that when the agent is allowed to choose its actions, it makes reasonable choices that allow good performance. We introduced and tested a framework for learning causal reasoning in various data settings-observational, interventional, and counterfactual-using deep meta-RL. Crucially, our approach did not require explicit encoding of formal principles of causal inference. Rather, by optimizing an agent to perform a task that depended on causal structure, the agent learned implicit strategies to use the available data for causal reasoning, including drawing inferences from passive observation, actively intervening, and making counterfactual predictions. Below, we summarize the key results from each of the three experiments. The results in Section 4.1 and FIG0 show that the agent learns to perform do-calculus. In FIG0 we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by FIG0, which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent, meaning that the intervention resulted in a different graph. In Section 4.2 and FIG2, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data).
In FIG2 we see that the agent with access to interventional data performs better than an agent with access to only observational data. FIG2 shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent's performance to a MAP estimate of the causal structure and find that the agent's performance matches it, indicating that the agent is indeed doing close to optimal causal inference. In Section 4.3 and FIG4, we show that the agent learns to use counterfactuals. In FIG4 we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In FIG4, we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and optimal choice is affected by the exogenous noise -i.e. where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case. This work is the first demonstration that causal reasoning can arise out of model-free reinforcement learning. This opens up the possibility of leveraging powerful learning-based methods for causal inference in complex settings. Traditional formal approaches usually decouple the two problems of causal induction (i.e. inferring the structure of the underlying model) and causal inference (i.e. estimating causal effects and answering counterfactual questions), and despite advances in both BID26 BID6 BID27 BID32 BID12 BID22, inducing models often requires assumptions that are difficult to fit to complex real-world conditions. By learning these end-to-end, our method can potentially find representations of causal structure best tuned to the specific causal inferences required. Another key advantage of our meta-RL approach is that it allows the agent to learn to interact with the environment in order to acquire necessary observations in the service of its task-i.e. to perform active learning. In our experimental domain, our agents' active intervention policy was close to optimal, which demonstrates the promise of agents that can learn to experiment on their environment and perform rich causal reasoning on the observations. Future work should explore agents that perform experiments to support structured exploration in RL, and optimal experiment design in complex domains where large numbers of blind interventions are prohibitive. To this end, follow-up work should focus on scaling up our approach to larger environments, with more complex causal structure and a more diverse range of tasks. Though the here are a first step in this direction which use relatively standard deep RL components, our approach will likely benefit from more advanced architectures (e.g. BID16 BID17 that allow longer more complex episodes, as well as models which are more explicitly compositional (e.g. BID3 BID0 or have richer semantics (e.g. BID13, that more explicitly leverage symmetries like equivalance classes in the environment. We can also compare the performance of these agents to two standard model-free RL baselines. The Q-total agent learns a Q-value for each action across all steps for all the episodes. The Q-episode agent learns a Q-value for each action conditioned on the input at each time step [o t,a t−1,r t−1], but with no LSTM memory to store previous actions and observations. 
Since the relationship between action and reward is random between episodes, Q-total was equivalent to selecting actions randomly, resulting in a considerably negative reward. The Q-episode agent essentially makes sure to not choose the arm that is indicated by m t to be the external intervention (which is assured to be equal to −5), and essentially chooses randomly otherwise, giving an average reward of 0. Consider a distribution D over Markov Decision Processes (MDPs). We train an agent with memory (in our case an RNN-based agent) on this distribution. In each episode, we sample a task m ∼ D. At each step t within an episode, the agent sees an observation o t, executes an action a t, and receives a reward r t. Both a t−1 and r t−1 are given as additional inputs to the network. Thus, via the recurrence of the network, each action is a function of the entire trajectory H t = {o 0, a 0, r 0,..., o t−1, a t−1, r t−1, o t} of the episode. Because this function is parameterized by the neural network, its complexity is limited only by the size of the network. BID30's "abduction-action-prediction" method prescribes one method for answering counterfactual queries, by estimating the specific unobserved makeup of individual i and by transferring it to the counterfactual world. Assume, for example, the following model for G of Section 2.1: E = w AE A + η, H = w AH A + w EH E + ε, where the weights w ij represent the known causal effects in G, and ε and η are terms of (e.g.) Gaussian noise that represent the unobserved randomness in the makeup of each individual 9. Suppose that for individual i we observe: A = a i, E = e i, H = h i. We can answer the counterfactual question of "What if individual i had done more exercise, i.e. E = e', instead?" by: a) Abduction: estimate the individual's specific makeup with ε i = h i − w AH a i − w EH e i, b) Action: set E to more exercise e', c) Prediction: predict a new value for cardiac health as h' = w AH a i + w EH e' + ε i. The purview of the previous experiments was to show a proof of concept on a simple tractable system, demonstrating that causal induction and inference can be learned and implemented via a meta-learned agent. In this experiment, we generalize some of the results to nonlinear, non-Gaussian causal graphs, which are more typical of real-world causal graphs, and demonstrate that our results hold without loss of generality on such systems. Here we investigate causal DAGs with a quadratic dependence on the parents by changing the conditional distribution p(X i |pa(X i)) so that the mean of each node is a quadratic function of its parents' values. Here, although each node is normally distributed given its parents, the joint distribution is not multivariate Gaussian due to the non-linearity in how the means are determined. We find that the Long-Observational agent achieves more reward than the Observational agent, indicating that the agent is in fact learning the statistical dependencies between the nodes, within an episode. We also find that the Active-Interventional agent is not far behind the performance of the MAP baseline, and achieves reward well above the Long-Observational agent 10. The fact that the MAP baseline gets so close to the Optimal Cause-Effect baseline indicates that the Active agent is choosing close to optimal actions. In the experiments reported in the main paper, the test set was a random subset of all graphs, and training examples were generated randomly subject to the constraint that they not be in the test set.
However, this raised the possibility that any test graph might have an equivalent graph in the training set, which could in a type of overfitting. We therefore ran a new set of experiments where the entire equivalence class of each test graph was held out from the training set 11 . Performance on the test set therefore indicates generalization of the inference procedures learned to previously unseen equivalence classes of causal DAGs. For these experiments, we used graphs with N =6 nodes, because 5-node graphs have too few equivalence classes to partition in this way. All other details were the same as in the main paper. We see in FIG9 that the agents learn to generalize well to these held out examples, and we find the same pattern of behavior noted in the main text where the rewards earned are ordered such that Observational agent < Passive-Conditional agent < Passive-Interventional agent < Passive-Counterfactual agent. We see additionally in FIG9 that the Active-Interventional agent performs at par with the Passive-Interventional agent (which is allowed to see the of interventions on all nodes) and significantly better than an additional baseline we use here of the Random-Interventional agent whose information phase policy is to intervene on nodes at random, indicating that the intervention policy learned by the Active agent is good. Graphical models BID28 BID4 BID19 BID2 BID25 are a marriage between graph and probability theory that allows to graphically represent and assess statistical dependence. In the following sections, we give some basic definitions and describe a method (d-separation) for graphically assessing statistical independence in belief networks. Figure 10: (a): Directed acyclic graph. The node X 3 is a collider on the path X 1 →X 3 ←X 2 and a non-collider on the path X 2 →X 3 →X 4. (b): Cyclic graph obtained from (a) by adding a link from X 4 to X 1.A graph is a collection of nodes and links connecting pairs of nodes. The links may be directed or undirected, giving rise to directed or undirected graphs respectively. A path from node X i to node X j is a sequence of linked nodes starting at X i and ending at X j. A directed path is a path whose links are directed and pointing from preceding towards following nodes in the sequence. 10 The conditional distribution p(X 1:N\j |Xj =5), and therefore Conditional agents, were non-trivial to calculate for the quadratic case.11 The hidden node was guaranteed to be a root node by rejecting all DAGs where the hidden node has parents A directed acyclic graph (DAG) is a directed graph with no directed paths starting and ending at the same node. For example, the directed graph in FIG1 is acyclic. The addition of a link from X 4 to X 1 gives rise to a cyclic graph (FIG1).A node X i with a directed link to X j is called parent of X j. In this case, X j is called child of X i.A node is a collider on a specified path if it has (at least) two parents on that path. Notice that a node can be a collider on a path and a non-collider on another path. For example, in FIG1 (a) X 3 is a collider on the path X 1 →X 3 ←X 2 and a non-collider on the path X 2 →X 3 →X 4.A node X i is an ancestor of a node X j if there exists a directed path from X i to X j. In this case, X j is a descendant of X i.A graphical model is a graph in which nodes represent random variables and links express statistical relationships between the variables. 
A belief network is a directed acyclic graphical model in which each node X i is associated with the conditional distribution p(X i |pa(X i)), where pa(X i) indicates the parents of X i. The joint distribution of all nodes in the graph, p(X 1:N), is given by the product of all conditional distributions, i.e. p(X 1:N) = ∏ i p(X i |pa(X i)). Given the sets of random variables X, Y and Z, X and Y are statistically independent given Z (X ⊥⊥ Y | Z) if all paths from any element of X to any element of Y are closed (or blocked). A path is closed if at least one of the following conditions is satisfied: (Ia) There is a non-collider on the path which belongs to the conditioning set Z. (Ib) There is a collider on the path such that neither the collider nor any of its descendants belong to the conditioning set Z. | meta-learn a learning algorithm capable of causal reasoning | 625 | scitldr |
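As a concrete illustration of this factorisation, and of the MAP baselines used in Experiments 1 and 2, the sketch below scores candidate linear-Gaussian DAGs by the likelihood they assign to observed samples and returns the maximiser (with a uniform prior over graphs, MAP selection reduces to maximum likelihood). Representing a DAG by a weight matrix W and using a 0.1 noise variance follow the setup described earlier; everything else is an illustrative assumption.

```python
import numpy as np

NOISE_STD = np.sqrt(0.1)  # assuming 0.1 is the noise variance

def log_joint(x, W, noise_std=NOISE_STD):
    """log p(x_1:N) = sum_i log p(x_i | pa(x_i)) for a linear-Gaussian belief network.
    W[j, i] is the weight on the edge X_j -> X_i (0 if the edge is absent)."""
    means = W.T @ x  # mean of each node given its parents
    return np.sum(-0.5 * ((x - means) / noise_std) ** 2
                  - np.log(noise_std * np.sqrt(2.0 * np.pi)))

def map_graph(samples, candidate_Ws):
    """Pick the candidate DAG maximising the likelihood of the observed samples,
    as in the (Observational / Interventional) MAP baselines."""
    scores = [sum(log_joint(x, W) for x in samples) for W in candidate_Ws]
    return candidate_Ws[int(np.argmax(scores))]
```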
Sentiment classification is an active research area with several applications including analysis of political opinions, classifying comments, movie reviews, news reviews and product reviews. To employ rule based sentiment classification, we require sentiment lexicons. However, manual construction of sentiment lexicon is time consuming and costly for resource-limited languages. To bypass manual development time and costs, we tried to build Amharic Sentiment Lexicons relying on corpus based approach. The intention of this approach is to handle sentiment terms specific to Amharic language from Amharic Corpus. Small set of seed terms are manually prepared from three parts of speech such as noun, adjective and verb. We developed algorithms for constructing Amharic sentiment lexicons automatically from Amharic news corpus. Corpus based approach is proposed relying on the word co-occurrence distributional embedding including frequency based embedding (i.e. Positive Point-wise Mutual Information PPMI). First we build word-context unigram frequency count matrix and transform it to point-wise mutual Information matrix. Using this matrix, we computed the cosine distance of mean vector of seed lists and each word in the corpus vocabulary. Based on the threshold value, the top closest words to the mean vector of seed list are added to the lexicon. Then the mean vector of the new sentiment seed list is updated and process is repeated until we get sufficient terms in the lexicon. Using PPMI with threshold value of 100 and 200, we got corpus based Amharic Sentiment lexicons of size 1811 and 3794 respectively by expanding 519 seeds. Finally, the lexicon generated in corpus based approach is evaluated. Most of sentiment mining research papers are associated to English languages. Linguistic computational resources in languages other than English are limited. Amharic is one of resource limited languages. Due to the advancement of World Wide Web, Amharic opinionated texts is increasing in size. To manage prediction of sentiment orientation towards a particular object or service is crucial for business intelligence, government intelligence, market intelligence, or support decision making. For carrying out Amharic sentiment classification, the availability of sentiment lexicons is crucial. To-date, there are two generated Amharic sentiment lexicons. These are manually generated lexicon and dictionary based Amharic SWN and SOCAL lexicons . However, dictionary based generated lexicons has short-comings in that it has difficulty in capturing cultural connotation and language specific features of the language. For example, Amharic words which are spoken culturally and used to express opinions will not be obtained from dictionary based sentiment lexicons. The word ጉርሻ/"feed in other people with hands which expresses love and live in harmony with others"/ in the Amharic text: "እንደ ጉርሻ ግን የሚያግባባን የለም... ጉርሻ እኮ አንዱ ለሌላው የማጉረስ ተግባር ብቻ አይደለም፤ በተጠቀለለው እንጀራ ውስጥ ፍቅር አለ፣ መተሳሰብ አለ፣ አክብሮት አለ።" has positive connotation or positive sentiment. But the dictionary meaning of the word ጉርሻ is "bonus". This is far away from the cultural connotation that it is intended to represent and express. We assumed that such kind of culture (or language specific) words are found in a collection of Amharic texts. However, dictionary based lexicons has short comings to capture sentiment terms which has strong ties to language and culture specific connotations of Amharic. 
Thus, this work builds corpus based algorithm to handle language and culture specific words in the lexicons. However, it could probably be impossible to handle all the words in the language as the corpus is a limited resource in almost all less resourced languages like Amharic. But still it is possible to build sentiment lexicons in particular domain where large amount of Amharic corpus is available. Due to this reason, the lexicon built using this approach is usually used for lexicon based sentiment analysis in the same domain from which it is built. The research questions to be addressed utilizing this approach are: How can we build an approach to generate Amharic Sentiment Lexicon from corpus?How do we evaluate the validity and quality of the generated lexicon? In this work, we set this approach to build Amharic polarity lexicons in automatic way relying on Amharic corpora which is mentioned shortly. The corpora are collected from different local news media organizations and also from facebook news' comments and you tube video comments to extend and enhance corpus size to capture sentiment terms into the generated PPMI based lexicon. In this part, we will present the key papers addressing corpus-based Sentiment Lexicon generation. In , large polarity lexicon is developed semiautomatically from the web by applying graph propagation method. A set of positive and negative sentences are prepared from the web for providing clue to expansion of lexicon. The method assigns a higher positive value if a given seed phrase contains multiple positive seed words, otherwise it is assigned negative value. The polarity p of seed phrase i is given by: where β is the factor that is responsible for preserving the overall semantic orientations between positive and negative flow over the graph. Both quantitatively and qualitatively, the performance of the web generated lexicon is outperforming the other lexicons generated from other manually annotate lexical resources like WordNet. The authors in developed two domain specific sentiment lexicons (historical and online community specific) from historical corpus of 150 years and online community data using word embedding with label propagation algorithm to expand small list of seed terms. It achieves competitive performance with approaches relying on hand curated lexicons. This revealed that there is sentiment change of words either positively to negatively or vice-versa through time. Lexical graph is constructed using PPMI matrix computed from word embedding. To fill the edges of two nodes (w i, w j), cosine similarity is computed. To propagate sentiment from seeds in lexical graph, random walk algorithm is adapted. That says, the polarity score of a seed set is proportional to probability of random walk from the seed set hitting that word. The generated lexicon from domain specific embedding outperforms very well when compared with the baseline and other variants. Our work is closely associated to the work of. generated emotion based lexicon by bootstrapping corpus using word distributional semantics (i.e. using PPMI). Our approach is different from their work in that we generated sentiment lexicon rather than emotion lexicon. The other thing is that the approach of propagating sentiment to expand the seeds is also different. We used cosine similarity of the mean vector of seed words to the corresponding word vectors in the vocabulary of the PPMI matrix. Besides, the threshold selection, the seed words part of speech are different from language to language. 
For example, Amharic has few adverb classes unlike Italian. Thus, our seed words do not contain adverbs. There are variety of corpus based strategies that include count based(e.g. PPMI) and predictive based(e.g. word embedding) approaches. In this part, we present the proposed count based approach to generate Amharic Sentiment lexicon from a corpus. In Figure 1, we present the proposed framework of corpus based approach to generate Amharic Sentiment lexicon. The framework has four components: (Amharic News) Corpus Collections, Preprocessing Module, PPMI Matix of Word-Context, Algorithm to generate (Amharic) Sentiment Lexicon ing in the Generated (Amharic) Sentiment Lexicon. The algorithm and the seeds in figure 1 are briefly described as follows. To generate Amharic Sentiment lexicon, we follow four major steps: 1. Prepare a set of seed lists which are strongly negatively and positively polarized Adjectives, Nouns and Verbs (Note: Amharic language contains few adverbs , adverbs are not taken as seed word). We will select at least seven most polarized seed words for each of aforementioned part-of-speech classes . Selection of seed words is the most critical that affects the performance of bootstrapping algorithm . Most authors choose the most frequently occurring words in the corpus as seed list. This is assumed to ensure the greatest amount of contextual information to learn from, however, we are not sure about the quality of the contexts. We adapt and follow seed selection guidelines of. After we tried seed selection based on , we update the original seed words. Sample summary of seeds are presented in Table 1. 2. Build semantic space word-context matrix using the number of occurrences(frequency) of target word with its context words of window size±2. Word-Context matrix is selected as it is dense and good for building word rich representations (i.e. similarity of words) unlike word-document design which is sparse and computationally expensive . Initially, let F be word-context raw frequency matrix with n r rows and n c columns formed from Amharic text corpora. Next, we apply weighting functions to select word semantic similarity descriminant features. There are variety of weighting functions to get meaningful semantic similarity between a word and its context. The most popular one is Point-wise Mutual Information (PMI) . In our case we use positive PMI by assigning 0 if it is less than 0 . Then, let X be new PMI based matrix that will be obtained by applying Positive PMI(PPMI) to matrix F. Matrix X will have the same number of rows and columns matrix F. The value of an element f ij is the number of times that word w i occurs in the context c j in matrix F. Then, the corresponding element x ij in the new matrix X would be defined as follows: Where, P M I(w i, c j) is the Point-wise Mutual Information that measures the estimated co-occurrence of word w i and its context c j and is given as: Where, P (w i, c j) is the estimated probability that the word w i occurs in the context of c j, P (w i) is the estimated probability of w i and P (c j) is the estimated probability of c i are defined in terms of frequency f ij. 3. Compute the cosine distance between target term and centroid of seed lists (e.g. centroid for positive adjective seeds, − − → µ + adj). 
To find the cosine distance of a new word from seed list, first we compute the centroids of seed lists of respective POS classes; for example, centroids for positive seeds S+ and negative seeds S-, for adjective class is given by: Similarly, centroids of the other seed classes will be found. Then, the cosine distances of target word from positive and negative adjective seeds of centroids, As word-context matrix x is vector space model, the cosine of the angle between two words vectors is the same as the inner product of the normalized unit word vectors. After we have cosine distances between word w i and seed with centroid w i, µ + adj, the similarity measure can be found using either: Similarly, the similarity score, Sim(w i, − − → µ − adj) can also be computed. This similarity score for each target word is mapped and scaled to appropriate real number. A target word whose sentiment score is below or above a particular threshold can be added to that sentiment dictionary in ranked order based on PMI based cosine distances. We choose positive PMI with cosine measure as it is performed consistently better than the other combination features with similarity metrics: Hellinger, Kullback-Leibler, City Block, Bhattacharya and Euclidean . 4. Repeat from step 3 for the next target term in the matrix to expand lexicon dictionary. Stop after a number of iterations defined by a threshold acquired experimental testing. The detail algorithm for generating Amharic sentiment lexicon from PPMI is presented in algorithm 1 Algorithm description: The algorithm 1 reads the seed words and generates the merge of expanded seed words using PPMI. Line 1 loads the seed words and assigns to their corresponding category of seed words. Similarly, from line 2 to 6 loads the necessary lexical resources such as PPMI matrix, vocabulary list, Amharic-English, AmharicAmharic, Amharic-Sentiment SWN and in line 7, the output Amharic Sentiment Lex. by PPMI is initialized to Null. From line 8 to 22, it iterates for each seed words polarity and categories. That is, line 9 to 11 checks that each seed term is found in the corpus vocabulary. Line 12 initializes the threshold by a selected trial number(in our case 100,200,1000, etc.). From line 13 to 22, iterates from i=0 to threshold in order to perform a set of operations. That is, line 16 computes the mean of the seed lexicon based on equation 3 specified in the previous section. Line 17 computes the similarity between the mean vector and the PPMI word-word co occurrence matrix and returns the top i most closest terms to the mean vector based on equation 5. Lines 18-19, it removes top closest items which has different part-of-speech to the seed words. Line 20-21 check the top i closest terms are has different polarity to the seed lexicon. Remove the term from top_ten_closest_terms list Update seed_lexicon by inserting top_ten_closest_terms list AM_Lexicon_by_PPMI ←AM_Lexicon_by_PPMI + seed_lexicon; algorithm 1: Amharic Sentiment Lexicon Generation Algorithm Using PPMI finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach. The quality of this lexicon will be evaluated using similar techniques used in dictionary based approaches . However, this approach may not probably produce sentiment lexicon with large coverage as the corpus size may be insufficient to include all polarity words in Amharic language. 
To reduce the level of this issue, we combine the lexicons generated in both dictionary based and corpus based approaches for Amharic Sentiment classification. Using corpus based approach, Amharic sentiment lexicon is built where it allows finding domain dependent opinions which might not be possible by sentiment lexicon generated using dictionary based approach. In this section, we have attempted to develop new approaches to bootstrapping relying on word-context semantic space representation of large Amharic corpora. We have manually prepared Amharic opinion words with highest sentimental strength either positively or negatively from three parts-of -speech categories: Adjectives, Nouns and Verbs. We expanded each seed category from which the snapshot seed words are presented in Table 1 Table 2. The corpus and data sets used in this research are presented as follows: i. Amharic Corpus: The size of this corpus is 20 milion tokens (teams from Addis Ababa University et al.). This corpus is used to build PPMI matrix and also to evaluate the coverage of PPMI based lexicon. ii. Facebook Users' Comment This data set is used to build PPMI matrix and also to evaluate subjectivity detection, lexicon coverage and lexicon based sentiment classification of the generated Amharic Sentiment lexicon. The data set is manually annotated by Government Office Affairs Communication(GOAC) professional and it is labeled either positive and negative. iii. Amharic Sentiment Lexicons: The Amharic sentiment lexicons includes manual , Amharic SentiWordNet(SWN) and Amharic Semantic Orientation Calculator(SOCAL) . These lexicons are used as benchmark to compare the performance of PPMI based lexicons. Using the aforementioned corpus, Amharic News Corpus with 11433 documents and 2800 Facebook News post and Users Comments are used to build word-context PPMI. First, we tried to clean the data set. After cleansing, we tokenized and word-Context count dictionary is built. Relying on this dictionary and in turn, it is transformed into PPMI Sparse matrices. This matrix is saved in file for further tasks of Amharic Sentiment lexicon generation. The total vocabulary size is 953,406. After stop words are removed and stemming is applied, the vocabulary size is reduced to 231,270. Thus, the size of this matrix is. Based on this word-word information in this matrix, the point-wise mutual information (PMI) of word-word is computed as in equation 2. The PMI is input to further computation of our corpus based lexicon expansion algorithm 1. Finally, we generated Amharic sentiment lexicons by expanding the seeds relying on the PPMI matrix of the corpus by implementing this algorithm 1 at two threshold iteration values of 100 and 200. With these iterations, we got corpus based Amharic Sentiment lexicons of size 1811 and 3794 respectively. We think, iterations >200 is better. Thus, our further discussion on evaluation of the approach is based on the lexicon generated at 200 iteration(i.e. Lexicon size of 3794). This lexicon saved with entries containing stemmed word, part of speech, polarity strength and polarity sign. Sample of this lexicon is presented in Table 2. 
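The two core computations of the proposed approach, turning raw word-context counts into a PPMI matrix and expanding a seed list by cosine distance to the seed centroid, can be sketched as follows. The code assumes a small dense count matrix for clarity (the actual matrix is sparse), and it omits the part-of-speech and polarity filtering of Algorithm 1; all function names are illustrative.

```python
import numpy as np

def ppmi(counts):
    """Positive PMI matrix from a word-context co-occurrence count matrix."""
    total = counts.sum()
    p_wc = counts / total
    p_w = p_wc.sum(axis=1, keepdims=True)
    p_c = p_wc.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0          # zero counts give PMI 0
    return np.maximum(pmi, 0.0)

def expand_seeds(X, vocab, seeds, top_k=10):
    """One expansion step: add the top_k words closest (cosine) to the seed centroid."""
    X_unit = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    idx = [vocab.index(w) for w in seeds if w in vocab]
    centroid = X_unit[idx].mean(axis=0)
    centroid /= np.linalg.norm(centroid) + 1e-12
    sims = X_unit @ centroid               # cosine similarity to the centroid
    ranked = [vocab[i] for i in np.argsort(-sims) if vocab[i] not in seeds]
    return seeds + ranked[:top_k]
```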
['ሀሰተኛ/lair/, ሀሰት/fake/, የሀሰት/fake/, ከሀሰተኛ/from fake maker/, ሀሰተኛና/lair and/, የሀሰተኛ/for fake maker/, ለሀሰተኛ/to lair/, ሀሴት/pleasure/, ታህሳት//, ሀሰትነት/fakeness/, ሀሰትን/fake/, ከሀሰተኛው/from the lair/, ሀሰተኛው/lair/, ሀሰቱን/lair/, ከሀሰት/from lair/, ሀሰትና/fake and/, ከሀሰቱ/from the fake/, ሀሴትና/pleasure and/, ሀሴትን/the pleasure/, ሀሳት/fake/, ለሀሰትና/to fake and/, ሁሰት//, የሀሰትና/for fake and/, ለሀሰት/to fake/, ሀሰቶች/alot of fake/,etc. We will evaluate in three ways: external to lexicon and internal to lexicon. External to lexicon is to test the usefulness and the correctness of each of the lexicon to find sentiment score of sentiment labeled Amharic comments corpus. Internal evaluation is compute the degree to which each of the generated lexicons are overlapped( or agreed) with manual, SOCAL and SWN(Amharic) Sentiment lexicons. In this part, we evaluate the accuracy of the subjectivity detection rate of generated PPMI based Amharic lexicon on 2800 facebook annotated comments. The procedures for aggregating sentiment score is done by summing up the positive and negative sentiment values of opinion words found in the comment if those opinion words are also found in the sentiment lexicon. If the sum of positive score greater or less than sum of negative score, then the comment is said to be a subjective comment, otherwise the comment is either neutral or mixed. Based on this technique, the subjectivity detection rate is presented in Table 3. Discussion: As subjectivity detection rate of the PPMI lexicon and others are depicted in Table 3, the detection rate of PPMI lexicon performs better than the baseline(manual lexicon). Where as Lexicon from SWN outperforms the PPMI based Lexicon with 2% accuracy. This is to evaluate the coverage of the generated lexicon externally by using the aforementioned Amharic corpus(both facebook comments and general corpus). That is, the coverage of PPMI based Amharic Sentiment Lexicon on facebook comments and a general Amharic corpus is computed by counting the occurrence tokens of the corpus in the generated sentiment lexicons and both positive and negative counts are computed in percent and it presented in Table 4. Table 4 depicted that the coverage of PPMI based Amharic sentiment lexicon is better than the manual lexicon and SOCAL. However, it has less coverage than SWN. Unlike SWN, PPMI based lexicon is generated from corpus. Due to this reason its coverage to work on a general domain is limited. It also demonstrated that the positive and negative count in almost all lexicons seems to have balanced and uniform distribution of sentiment polarity terms in the corpus. In this part, we will evaluate to what extent the generated PPMI based Lexicon agreed or overlapped with the other lexicons. This type of evaluation (internal) which validates by stating the percentage of entries in the lexicons are available in common. The more percentage means the better agreement or overlap of lexicons. The agreement of PPMI based in percentage is presented in Table 5. Discussion: Table 5 presents the extent to which the PPMI based lexicon is agreed with other lexicons. PPMI based lexicon has got the highest agreement rate (overlap) with SWN lexicon than the other lexicons. In this part, Table 6 presents the lexicon based sentiment classification performance of generated PPMI based Amharic lexicon on 2821 annotated Amharic Facebook Comments. The classification accuracy of generated PPMI based Lexicon and other lexicons are compared. 
Discussion: Besides the other evaluations of the generated PPMI based lexicon, the usefulness of this lexicon is tested on actual lexicon based Amharic sentiment classification. As depicted in Table 6 The accuracy of PPMI based lexicon for lexicon based sentiment classification is better than the manual benchmark lexicon. As discussed on dictionary based lexicons for lexicon based sentiment classification in earlier section, using stemming and negation handling are far improving the performance lexicon based classification. Besides combination of lexicons outperforms better than the individual lexicon. Creating a sentiment lexicon generation is not an objective process. The generated lexicon is dependent on the task it is applied. Thus, in this work we have seen that it is possible to create Sentiment lexicon for low resourced languages from corpus. This captures the language specific features and connotations related to the culture where the language is spoken. This can not be handled using dictionary based approach that propagates labels from resource rich languages. To the best of our knowledge, the the PPMI based approach to generate Amharic Sentiment lexicon form corpus is performed for first time for Amharic language with almost minimal costs and time. Thus, the generated lexicons can be used in combination with other sentiment lexicons to enhance the performance of sentiment classifications in Amharic language. The approach is a generic approach which can be adapted to other resource limited languages to reduce cost of human annotation and the time it takes to annotated sentiment lexicons. Though the PPMI based Amharic Sentiment lexicon outperforms the manual lexicon, prediction (word embedding) based approach is recommended to generate sentiment lexicon for Amharic language to handle context sensitive terms. | Corpus based Algorithm is developed generate Amharic Sentiment lexicon relying on corpus | 626 | scitldr |
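For completeness, a minimal sketch of the lexicon-based aggregation used above for subjectivity detection and sentiment classification: sum the positive and negative scores of a comment's words that appear in the lexicon and compare the two sums. The lexicon format (a word-to-score mapping) and the stemmer argument are assumptions of this sketch.

```python
def score_comment(tokens, lexicon, stem=lambda w: w):
    """Sum positive and negative polarity values of the comment's (stemmed) words
    that appear in the sentiment lexicon, then compare the two sums."""
    pos = neg = 0.0
    for t in tokens:
        v = lexicon.get(stem(t), 0.0)
        if v > 0:
            pos += v
        elif v < 0:
            neg += -v
    if pos == neg:
        return "neutral/mixed"   # objective or mixed comment
    return "positive" if pos > neg else "negative"
```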
Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms. In particular, in scenarios with only positive rewards, Q-values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation. Merely initialising the network to output optimistic Q-values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration. We propose a simple count-based augmentation to pessimistically initialised Q-values that separates the source of optimism from the neural network. We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting. Our algorithm, Optimistic Pessimistically Initialised Q-Learning (OPIQ), augments the Q-value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping. We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs. In reinforcement learning (RL), exploration is crucial for gathering sufficient data to infer a good control policy. As environment complexity grows, exploration becomes more challenging and simple randomisation strategies become inefficient. While most provably efficient methods for tabular RL are model-based (; ;), in deep RL, learning models that are useful for planning is notoriously difficult and often more complex than modelfree methods. Consequently, model-free approaches have shown the best final performance on large complex tasks (; 2016;), especially those requiring hard exploration . Therefore, in this paper, we focus on how to devise model-free RL algorithms for efficient exploration that scale to large complex state spaces and have strong theoretical underpinnings. Despite taking inspiration from tabular algorithms, current model-free approaches to exploration in deep RL do not employ optimistic initialisation, which is crucial to provably efficient exploration in all model-free tabular algorithms. This is because deep RL algorithms do not pay special attention to the initialisation of the neural networks and instead use common initialisation schemes that yield initial Q-values around zero. In the common case of non-negative rewards, this means Q-values are initialised to their lowest possible values, i.e., a pessimistic initialisation. While initialising a neural network optimistically would be trivial, e.g., by setting the bias of the final layer of the network, the uncontrolled generalisation in neural networks changes this initialisation quickly. Instead, to benefit exploration, we require the Q-values for novel state-action pairs must remain high until they are explored. An empirically successful approach to exploration in deep RL, especially when reward is sparse, is intrinsic motivation . A popular variant is based on pseudocounts , which derive an intrinsic bonus from approximate visitation counts over states and is inspired by the tabular MBIE-EB algorithm . 
However, adding a positive intrinsic bonus to the reward yields optimistic Q-values only for state-action pairs that have already been chosen sufficiently often. Incentives to explore unvisited states rely therefore on the generalisation of the neural network. Exactly how the network generalises to those novel state-action pairs is unknown, and thus it is unclear whether those estimates are optimistic when compared to nearby visited state-action pairs. Figure 1 Consider the simple example with a single state and two actions shown in Figure 1. The left action yields +0.1 reward and the right action yields +1 reward. An agent whose Q-value estimates have been zero-initialised must at the first time step select an action randomly. As both actions are underestimated, this will increase the estimate of the chosen action. Greedy agents always pick the action with the largest Q-value estimate and will select the same action forever, failing to explore the alternative. Whether the agent learns the optimal policy or not is thus decided purely at random based on the initial Q-value estimates. This effect will only be amplified by intrinsic reward. To ensure optimism in unvisited, novel state-action pairs, we introduce Optimistic Pessimistically Initialised Q-Learning (OPIQ). OPIQ does not rely on an optimistic initialisation to ensure efficient exploration, but instead augments the Q-value estimates with count-based bonuses in the following manner: where N (s, a) is the number of times a state-action pair has been visited and M, C > 0 are hyperparameters. These Q + -values are then used for both action selection and during bootstrapping, unlike the above methods which only utilise Q-values during these steps. This allows OPIQ to maintain optimism when selecting actions and bootstrapping, since the Q + -values can be optimistic even when the Q-values are not. In the tabular domain, we base OPIQ on UCB-H , a simple online Q-learning algorithm that uses count-based intrinsic rewards and optimistic initialisation. Instead of optimistically initialising the Q-values, we pessimistically initialise them and use Q + -values during action selection and bootstrapping. Pessimistic initialisation is used to enable a worst case analysis where all of our Q-value estimates underestimate Q * and is not a requirement for OPIQ. We show that these modifications retain the theoretical guarantees of UCB-H. Furthermore, our algorithm easily extends to the Deep RL setting. The primary difficulty lies in obtaining appropriate state-action counts in high-dimensional and/or continuous state spaces, which has been tackled by a variety of approaches (; ; ; a) and is orthogonal to our contributions. We demonstrate clear performance improvements in sparse reward tasks over 1) a baseline DQN that just uses intrinsic motivation derived from the approximate counts, 2) simpler schemes that aim for an optimistic initialisation when using neural networks, and 3) strong exploration baselines. We show the importance of optimism during action selection for ensuring efficient exploration. Visualising the predicted Q + -values shows that they are indeed optimistic for novel state-action pairs. We consider a Markov Decision Process (MDP) defined as a tuple (S, A, P, R), where S is the state space, A is the discrete action space, P (·|s, a) is the state-transition distribution, R(·|s, a) is the distribution over rewards and γ ∈ is the discount factor. 
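The failure mode of Figure 1, and how a count-based bonus of the form C/(N(s, a) + 1)^M avoids it, can be reproduced with a few lines of simulation; the step count and hyperparameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
rewards = np.array([0.1, 1.0])           # left: +0.1, right: +1 (Figure 1)

def run(bonus=False, steps=50, C=1.0, M=2.0):
    Q = np.zeros(2)                       # pessimistic (zero) initialisation
    N = np.zeros(2)
    total = 0.0
    for _ in range(steps):
        scores = Q + (C / (N + 1.0) ** M if bonus else 0.0)
        a = rng.choice(np.flatnonzero(scores == scores.max()))  # greedy, random ties
        r = rewards[a]
        N[a] += 1
        Q[a] += (r - Q[a]) / N[a]         # running-average Q-value update
        total += r
    return total, Q

print(run(bonus=False))  # greedy agent keeps its first (possibly suboptimal) action forever
print(run(bonus=True))   # the count bonus forces it to try both actions at least once
```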
The goal of the agent is then to maximise the expected discounted sum of rewards: E[∞ t=0 γ t r t |r t ∼ R(·|s t, a t)], in the discounted episodic setting. A policy π(·|s) is a mapping from states to actions such that it is a valid probability distribution. Deep Q-Network (DQN) uses a nonlinear function approximator (a deep neural network) to estimate the action-value function, Q(s, a; θ) ≈ Q * (s, a), where θ are the parameters of the network. Exploration based on intrinsic rewards (e.g.,), which uses a DQN agent, additionally augments the observed rewards r t with a bonus β/ N (s t, a t) based on pseudo-visitation-counts N (s t, a t). The DQN parameters θ are trained by gradient descent on the mean squared regression loss L with bootstrapped'target' y t: Figure 2: A simple regression task to illustrate the effect of an optimistic initialisation in neural networks. Left: 10 different networks whose final layer biases are initialised at 3 (shown in green), and the same networks after training on the blue data points (shown in red). Right: One of the trained networks whose output has been augmented with an optimistic bias as in equation 1. The counts were obtained by computing a histogram over the input space [−2, 2] with 50 bins. The expectation is estimated with uniform samples from a replay buffer D . D stores past transitions (s t, a t, r t, s t+1), where the state s t+1 is observed after taking the action a t in state s t and receiving reward r t. To improve stability, DQN uses a target network, parameterised by θ −, which is periodically copied from the regular network and kept fixed for a number of iterations. Our method Optimistic Pessimistically Initialised Q-Learning (OPIQ) ensures optimism in the Qvalue estimates of unvisited, novel state-action pairs in order to drive exploration. This is achieved by augmenting the Q-value estimates in the following manner: and using these Q + -values during action selection and bootstrapping. In this section, we motivate OPIQ, analyse it in the tabular setting, and describe a deep RL implementation. Optimistic initialisation does not work with neural networks. For an optimistic initialisation to benefit exploration, the Q-values must start sufficiently high. More importantly, the values for unseen state-action pairs must remain high, until they are updated. When using a deep neural network to approximate the Q-values, we can initialise the network to output optimistic values, for example, by adjusting the final bias. However, after a small amount of training, the values for novel state-action pairs may not remain high. Furthermore, due to the generalisation of neural networks we cannot know how the values for these unseen state-action pairs compare to the trained state-action pairs. Figure 2 (left), which illustrates this effect for a simple regression task, shows that different initialisations can lead to dramatically different generalisations. It is therefore prohibitively difficult to use optimistic initialisation of a deep neural network to drive exploration. Instead, we augment our Q-value estimates with an optimistic bonus. Our motivation for the form of the bonus in equation 1, C (N (s,a)+1) M, stems from UCB-H , where all tabular Q-values are initialised with H and the first update for a state-action pair completely overwrites that value because the learning rate for the update (η 1) is 1. 
One can alternatively view these Q-values as zero-initialised with the additional term Q(s, a) + H · 1{N(s, a) < 1}, where N (s, a) is the visitation count for the state-action pair (s, a). Our approach approximates the discrete indicator function 1 as (N (s, a) + 1) −M for sufficiently large M. However, since gradient descent cannot completely overwrite the Q-value estimate for a state-action pair after a single update, it is beneficial to have a smaller hyperparameter M that governs how quickly the optimism decays. for each timestep t = 1,..., H do Take action a t ← arg max a Q + t (s t, a). Receive r(s t, a t, t) and s t+1. Increment N (s t, a t, t). For a worst case analysis we assume all Q-value estimates are pessimistic. In the common scenario where all rewards are nonnegative, the lowest possible return for an episode is zero. If we then zero-initialise our Q-value estimates, as is common for neural networks, we are starting with a pessimistic initialisation. As shown in Figure 2 (left), we cannot predict how a neural network will generalise, and thus we cannot predict if the Q-value estimates for unvisited state-action pairs will be optimistic or pessimistic. We thus assume they are pessimistic in order to perform a worst case analysis. However, this is not a requirement: our method works with any initialisation and rewards. In order to then approximate an optimistic initialisation, the scaling parameter C in equation 1 can be chosen to guarantee unseen Q + -values are overestimated, for example, C:= H in the undiscounted finite-horizon tabular setting and C:= 1/(1 − γ) in the discounted episodic setting (assuming 1 is the maximum reward obtainable at each timestep). However, in some environments it may be beneficial to use a smaller parameter C for faster convergence. These Q + -values are then used both during action selection and during bootstrapping. Note that in the finite horizon setting the counts N, and thus Q +, would depend on the timestep t. Hence, we split the optimistic Q + -values into two parts: a pessimistic Q-value component and an optimistic component based solely on the counts for a state-action pair. This separates our source of optimism from the neural network function approximator, yielding Q + -values that remain high for unvisited state-action pairs, assuming a suitable counting scheme. Figure 2 (right) shows the effects of adding this optimistic component to a network's outputs. + -values provide an increased incentive to explore. By using optimistic Q + estimates, especially during action selection and bootstrapping, the agent is incentivised to try and visit novel state-action pairs. Being optimistic during action selection in particular encourages the agent to try novel actions that have not yet been visitied. Without an optimistic estimate for novel state-action pairs the agent would have no incentive to try an action it has never taken before at a given state. Being optimistic during bootstrapping ensures the agent is incentivised to return to states in which it has not yet tried every action. This is because the maximum Q + -value will be large due to the optimism bonus. Both of these effects lead to a strong incentive to explore novel state-action pairs. In order to ensure that OPIQ has a strong theoretical foundation, we must ensure it is provably efficient in the tabular domain. We restrict our analysis to the finite horizon tabular setting and only consider building upon UCB-H for simplicity. 
Achieving a better regret bound using UCB-B and extending the analysis to the infinite horizon discounted setting are steps for future work. Our algorithm removes the optimistic initialisation of UCB-H, instead using a pessimistic initialisation (all Q-values start at 0). We then use our Q⁺-values during action selection and bootstrapping. Pseudocode is presented in Algorithm 1. Theorem 1. For any p ∈ (0, 1), with probability at least 1 − p the total regret of Q⁺ is at most O(√(H⁴SAT log(SAT/p))) for M ≥ 1, and at most O(H^{1+M}SAT^{1−M} + √(H⁴SAT log(SAT/p))) for 0 < M < 1. The proof is based on that of Theorem 1 for UCB-H. Our Q⁺-values are always greater than or equal to the Q-values that UCB-H would estimate, thus ensuring that our estimates are also greater than or equal to Q*. Our overestimation relative to UCB-H is then governed by the quantity H/(N(s, a) + 1)^M, which when summed over all timesteps does not depend on T for M > 1. As M → ∞ we exactly recover UCB-H, and we match the asymptotic performance of UCB-H for M ≥ 1. Smaller values of M result in our optimism decaying more slowly, which results in more exploration. The full proof is included in Appendix I. We also show that OPIQ without optimistic action selection or the count-based intrinsic motivation term b^T_N is not provably efficient, by showing it can incur linear regret with high probability on simple MDPs (see Appendices G and H). Our primary motivation for considering a tabular algorithm that pessimistically initialises its Q-values is to provide a firm theoretical foundation on which to base a deep RL algorithm, which we describe in the next section. For deep RL, we base OPIQ on DQN, which uses a deep neural network with parameters θ as a function approximator Q_θ. During action selection, we use our Q⁺-values to determine the greedy action: a_t = argmax_a [Q_θ(s_t, a) + C_action/(N(s_t, a) + 1)^M], where C_action is a hyperparameter governing the scale of the optimistic bias during action selection. In practice, we use an ε-greedy policy. After every timestep, we sample a batch of experiences from our experience replay buffer and use n-step Q-learning. We recompute the counts for each relevant state-action pair, to avoid using stale pseudo-rewards. The network is trained by gradient descent on the loss in equation 2 with the target y_t = Σ_{k=0}^{n−1} γ^k r_{t+k} + γ^n max_a [Q_{θ⁻}(s_{t+n}, a) + C_bootstrap/(N(s_{t+n}, a) + 1)^M], where C_bootstrap is a hyperparameter that governs the scale of the optimistic bias during bootstrapping. For our final experiments on Montezuma's Revenge we additionally use the Mixed Monte Carlo (MMC) target, which mixes the target with the environmental Monte Carlo return for that episode. Further details are included in Appendix D.4. We use the method of static hashing to obtain our pseudocounts on the first 2 of the 3 environments we test on. For our experiments on Montezuma's Revenge we count over a downsampled image of the current game frame. More details can be found in Appendix B. A DQN with pseudocount-derived intrinsic reward (DQN + PC) can be seen as a naive extension of UCB-H to the deep RL setting. However, it does not attempt to ensure optimism in the Q-values used during action selection and bootstrapping, which is a crucial component of UCB-H. Furthermore, even if the Q-values were initialised optimistically at the start of training, they would not remain optimistic long enough to drive exploration, due to the use of neural networks. OPIQ, on the other hand, is designed with these limitations of neural networks in mind. By augmenting the neural network's Q-value estimates with optimistic bonuses of the form C/(N(s, a) + 1)^M, OPIQ ensures that the Q⁺-values used during action selection and bootstrapping are optimistic.
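A sketch of the optimistic n-step bootstrap target described above is given below. It assumes the supplied rewards already include any intrinsic bonus and it ignores episode termination; the function name and default hyperparameter values are illustrative, not the paper's.

```python
import numpy as np

def n_step_optimistic_target(rewards, q_next, counts_next, gamma=0.99,
                             C_bootstrap=0.1, M=2.0):
    """n-step target that bootstraps from max_a Q+(s_{t+n}, a).

    rewards:     the n rewards r_t, ..., r_{t+n-1} (environmental plus intrinsic).
    q_next:      Q_theta(s_{t+n}, a) for all actions (network output).
    counts_next: pseudo-counts N(s_{t+n}, a) for the same actions.
    """
    q_plus_next = q_next + C_bootstrap / np.power(counts_next + 1.0, M)
    target = np.max(q_plus_next)
    for r in reversed(rewards):       # fold in rewards from t+n-1 back to t
        target = r + gamma * target
    return target
```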
We can thus consider OPIQ as a deep version of UCB-H. Our results show that optimism during action selection and bootstrapping is extremely important for ensuring efficient exploration. Tabular Domain: There is a wealth of literature related to provably efficient exploration in the tabular domain. Popular model-based algorithms such as R-MAX, MBIE (and MBIE-EB), UCRL2 and UCBVI are all based on the principle of optimism in the face of uncertainty. Other work adopts a Bayesian viewpoint and argues that posterior sampling (PSRL) is more practically efficient than approaches that are optimistic in the face of uncertainty, and proves that in Bayesian expectation PSRL matches the performance of any optimistic algorithm up to constant factors. Later work proves that an optimistic variant of PSRL is provably efficient under a frequentist regret bound. The only provably efficient model-free algorithms to date are delayed Q-learning and UCB-H (and UCB-B). Delayed Q-learning optimistically initialises the Q-values and carefully controls when they are updated. UCB-H and UCB-B also optimistically initialise the Q-values, but additionally utilise a count-based intrinsic motivation term and a special learning rate to achieve a favourable regret bound compared to model-based algorithms. In contrast, OPIQ pessimistically initialises the Q-values. Whilst we base our current analysis on UCB-H, the idea of augmenting pessimistically initialised Q-values can be applied to any model-free algorithm. Deep RL Setting: A popular approach to improving exploration in deep RL is to utilise intrinsic motivation, which computes a quantity to add to the environmental reward. Most relevant to our work are count-based intrinsic rewards, which take inspiration from MBIE-EB. These methods utilise the number of times a state has been visited to compute the intrinsic reward, and outline a framework for obtaining approximate counts, dubbed pseudocounts, through a learned density model over the state space. Prior analysis shows that RLSVI achieves provably efficient Bayesian expected regret, which requires a prior distribution over MDPs, whereas OPIQ achieves provably efficient worst-case regret. Bootstrapped DQN with a prior is thus a model-free algorithm that has strong theoretical support in the tabular setting. Empirically, however, its performance on sparse reward tasks is worse than DQN with pseudocounts. Another approach shifts and scales the rewards so that a zero-initialisation is optimistic. When applied to neural networks this approach does not result in optimistic Q-values, due to the generalisation of the networks, and using a pseudocount intrinsic motivation term has been shown to perform much better empirically on hard exploration tasks. The DORA approach attempts to generalise the notion of a count to include information about the counts of future state-action pairs in a trajectory, which is used to provide bonuses during action selection. Follow-up work extends delayed Q-learning to utilise these generalised counts and proves the scheme is PAC-MDP. The generalised counts are obtained through E-values, which are learnt using SARSA with a constant 0 reward and E-value estimates initialised at 1. When scaling to the deep RL setting, these E-values are estimated using neural networks that cannot maintain their initialisation for unvisited state-action pairs, which is crucial for providing an incentive to explore. By contrast, OPIQ uses a separate source to generate the optimism necessary to explore the environment. We compare OPIQ against baselines and ablations on three sparse reward environments.
The first is a randomised version of the Chain environment proposed in prior work, with a chain of length 100, which we call Randomised Chain. The second is a two-dimensional maze in which the agent starts in the top left corner (white dot) and is only rewarded upon finding the goal (light grey dot). We use an image of the maze as input and randomise the actions similarly to the chain. The third is Montezuma's Revenge from the Arcade Learning Environment, a notoriously difficult sparse reward environment commonly used as a benchmark to evaluate the performance and scaling of deep RL exploration algorithms. See Appendix D for further details on the environments, baselines and hyperparameters used. We compare OPIQ against a variety of DQN-based approaches that use pseudocount intrinsic rewards, the DORA agent (which generates count-like optimism bonuses using a neural network), and two strong exploration baselines:
ε-greedy DQN: a standard DQN that uses an ε-greedy policy to encourage exploration. We anneal ε linearly over a fixed number of timesteps from 1 to 0.01.
DQN + PC: we add an intrinsic reward of β/√N(s, a) to the environmental reward, based on the pseudocount literature.
DQN R-Subtract (+PC): we subtract a constant from all environmental rewards received when training, so that a zero-initialisation is optimistic, as described for a DQN in prior work.
DQN Bias (+PC): we initialise the bias of the final layer of the DQN to a positive value at the start of training, as a simple method for optimistic initialisation with neural networks.
DQN + DORA: we use the generalised counts (E-values) as an intrinsic reward.
DQN + DORA OA: we additionally use the generalised counts to provide an optimistic bonus during action selection.
DQN + RND: we add the RND bonus as an intrinsic reward.
BSP: we use Bootstrapped DQN with randomised prior functions.
In order to better understand the importance of each component of our method, we also evaluate the following ablations:
Optimistic Action Selection (OPIQ w/o OB): we only use our Q⁺-values during action selection, and use Q during bootstrapping (without Optimistic Bootstrapping). The intrinsic motivation term remains.
Optimistic Action Selection and Bootstrapping (OPIQ w/o PC): we use our Q⁺-values during action selection and bootstrapping, but do not include an intrinsic motivation term (without Pseudo Counts).
We first consider the visually simple domain of the randomised chain and compare the count-based methods. Figure 3 shows the performance of OPIQ compared to the baselines and ablations. OPIQ significantly outperforms the baselines, which do not have any explicit mechanism for optimism during action selection. A DQN with pseudocount-derived intrinsic rewards is unable to reliably find the goal state, but setting the final layer's bias to one produces much better performance. For the DQN variant in which a constant is subtracted from all rewards, all of the configurations (including those with pseudocount-derived intrinsic bonuses) were unable to find the goal on the right, and thus the agents quickly learn to latch onto the inferior reward of moving left. Compared to its ablations, OPIQ is more stable in this task. OPIQ without pseudocounts performs similarly to OPIQ but is more varied across seeds, whereas the lack of optimistic bootstrapping results in worse performance and significantly more variance across seeds. We next consider the harder and more visually complex task of the Maze and compare against all baselines.
Figure 4 shows that only OPIQ is able to find the goal in the sparse reward maze. This indicates that explicitly ensuring optimism during action selection and bootstrapping can have a significant positive impact in sparse reward tasks, and that a naive extension of UCB-H to the deep RL setting (DQN + PC) results in insufficient exploration. Figure 4 (right) shows that attempting to ensure optimistic Q-values by adjusting the bias of the final layer (DQN Bias + PC), or by subtracting a constant from the reward (DQN R-Subtract + PC), has very little effect. As expected, DQN + RND performs poorly on this domain compared to the pseudocount-based methods. The visual input does not vary much across the state space, resulting in the RND bonus failing to provide enough intrinsic motivation to ensure efficient exploration. Additionally, it does not feature any explicit mechanism for optimism during action selection, and thus Figure 4 (right) shows it explores the environment relatively slowly. Both DQN+DORA and DQN+DORA OA also perform poorly in this domain, since their source of intrinsic motivation disappears quickly. As noted in Figure 2, neural networks do not maintain their starting initialisations after training. Thus, the intrinsic reward DORA produces goes to 0 quickly, since the network producing its bonuses learns to generalise quickly. BSP is the only exploration baseline we test that does not add an intrinsic reward to the environmental reward, and thus it performs poorly compared to the other baselines on this environment. Figure 5 shows that OPIQ and all its ablations manage to find the goal in the maze. OPIQ also explores slightly faster than its ablations (right), which shows the benefits of optimism during both action selection and bootstrapping. In addition, the episodic reward for the ablation without optimistic bootstrapping is noticeably more unstable (Figure 5, left). Interestingly, OPIQ without pseudocounts performs significantly worse than the other ablations. This is surprising, since the theory suggests that the count-based intrinsic motivation is only required when the reward or transitions of the MDP are stochastic, which is not the case here. We hypothesise that adding PC-derived intrinsic bonuses to the reward provides an easier learning problem, especially when using n-step Q-learning, which yields the performance gap. However, our results show that the PC-derived intrinsic bonuses are not enough on their own to ensure sufficient exploration. The large difference in performance between DQN + PC and OPIQ w/o OB is important, since they only differ in the use of optimistic action selection. The results in Figures 4 and 5 show that optimism during action selection is extremely important in exploring the environment efficiently. Intuitively, this makes sense, since it provides an incentive for the agent to try actions it has never tried before, which is crucial in exploration. Finally, we consider Montezuma's Revenge, one of the hardest sparse reward games from the ALE. Note that we only train up to 12.5 million timesteps (50 million frames), a quarter of the usual training time (50 million timesteps, 200 million frames). Figure 7 shows that OPIQ significantly outperforms the baselines in terms of the episodic reward and the maximum episodic reward achieved during training. The higher episode reward and much higher maximum episode reward of OPIQ compared to DQN + PC once again demonstrate the importance of optimism during action selection and bootstrapping.
In this environment BSP performs much better than in the Maze, but achieves significantly lower episodic rewards than OPIQ. Figure 8 shows the number of distinct rooms visited across the training period. We can see that OPIQ manages to reliably explore 12 rooms during the 12.5 million timesteps, significantly more than the other methods, thus demonstrating its improved exploration in this complex environment. Our results on this challenging environment show that OPIQ can scale to high-dimensional, complex environments and continue to provide significant performance improvements over an agent only using pseudocount-based intrinsic rewards. This paper presented OPIQ, a model-free algorithm that does not rely on an optimistic initialisation to ensure efficient exploration. Instead, OPIQ augments the Q-value estimates with a count-based optimism bonus. We showed that this is provably efficient in the tabular setting by modifying UCB-H to use a pessimistic initialisation and our augmented Q⁺-values for action selection and bootstrapping. Since our method does not rely on a specific initialisation scheme, it easily scales to deep RL when paired with an appropriate counting scheme. Our results showed the benefits of maintaining optimism both during action selection and bootstrapping for exploration on a number of hard sparse reward environments, including Montezuma's Revenge. In future work, we aim to extend OPIQ by integrating it with more expressive counting schemes.

For the tabular setting, we consider a discrete finite-horizon Markov Decision Process (MDP), which can be defined as a tuple (S, A, {P_t}, {R_t}, H, ρ), where S is the finite state space, A is the finite action space, P_t(·|s, a) is the state-transition distribution for timestep t = 1, ..., H, R_t(·|s, a) is the distribution over rewards after taking action a in state s, H is the horizon, and ρ is the distribution over starting states. Without loss of generality we assume that R_t(·|s, a) ∈ [0, 1]. We use S and A to denote the number of states and the number of actions, respectively, and N(s, a, t) as the number of times a state-action pair (s, a) has been visited at timestep t. Our goal is to find a set of policies π_t: S → A, π := {π_t}, that chooses the agent's actions at time t such that the expected sum of future rewards is maximised. To this end we define the Q-value at time t of a given policy π as Q^π_t(s, a) := E[r + Q^π_{t+1}(s′, π_{t+1}(s′)) | r ∼ R_t(·|s, a), s′ ∼ P_t(·|s, a)], where Q^π_t(s, a) = 0 for all t > H. The agent interacts with the environment for K episodes, T := KH, yielding a total regret Regret(K) := Σ_{k=1}^{K} [V*_1(s^k_1) − V^{π_k}_1(s^k_1)], where s^k_1 refers to the starting state and π_k to the policy at the beginning of episode k. We are interested in bounding the worst-case total regret with probability 1 − p, 0 < p < 1. UCB-H is an online Q-learning algorithm for the finite-horizon setting outlined above whose worst-case total regret is bounded, with probability 1 − p, by O(√(H⁴SAT log(SAT/p))). All Q-values for timesteps t ≤ H are optimistically initialised at H. The learning rate is defined as η_N = (H + 1)/(H + N), where N := N(s_t, a_t, t) is the number of times state-action pair (s_t, a_t) has been observed at step t, and η_1 = 1 at the first encounter of any state-action pair. The update rule for a transition at step t from state s_t to s_{t+1}, after executing action a_t and receiving reward r_t, is Q_t(s_t, a_t) ← (1 − η_N) Q_t(s_t, a_t) + η_N [r_t + V_{t+1}(s_{t+1}) + b^T_N], where b^T_N is the count-based intrinsic motivation term. In deep RL, the primary difficulty for exploration based on count-based intrinsic rewards is obtaining appropriate state-action counts.
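The sketch below implements the single tabular update just described; the function name is an assumption, V_next stands for the algorithm's value estimate of the next state at step t+1, and b_N corresponds to the bonus term b^T_N.

```python
def ucb_h_update(Q, V_next, r, N, H, b_N):
    """One UCB-H style update for a visited (s, a) at timestep t (tabular).

    Q:      current Q_t(s, a) estimate.
    V_next: value estimate of s_{t+1} at timestep t+1.
    r:      observed reward r_t.
    N:      visit count of (s, a) at timestep t (after incrementing).
    Sketch of the update rule above, not a full implementation of the algorithm.
    """
    eta = (H + 1.0) / (H + N)          # learning rate eta_N = (H+1)/(H+N)
    return (1.0 - eta) * Q + eta * (r + V_next + b_N)
```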
In this paper we utilise approximate counting schemes in order to cope with continuous and/or high-dimensional state spaces. In particular, for the chain and maze environments we use static hashing, which projects a state s to a low-dimensional feature vector φ(s) = sign(Af(s)), where f flattens the state s into a single dimension of length D; A is a k × D matrix whose entries are initialised i.i.d. from a unit Gaussian N(0, 1); and k is a hyperparameter controlling the granularity of counting: higher k leads to more distinguishable states at the expense of generalisation. Given the vector φ(s), we use a counting bloom filter to update and retrieve its counts efficiently. To obtain counts N(s, a) for state-action pairs, we maintain a separate data structure of counts for each action (the same vector φ(s) is used for all actions). This counting scheme is tabular, and hence the counts for sufficiently different states do not interfere with one another. This ensures Q⁺-values for unseen state-action pairs in equation 1 are large. For our experiments on Montezuma's Revenge we use the same method of downsampling as prior work, in which the greyscale state representation is resized from (42x42) to (11x8) and then binned from {0, ..., 255} into 8 categories. We then maintain tabular counts over the new representation. The granularity of the counting scheme is an important modelling consideration. If it is too granular, then it will assign an optimistic bias in regions of the state space where the network should be trusted.

The Maze is a 2-dimensional gridworld maze with a sparse reward in which the agent can move Up, Down, Left or Right. The agent starts each episode at a fixed location and must traverse through the maze in order to find the goal, which provides +10 reward and terminates the episode; all other rewards are 0. The agent interacts with the maze for 250 timesteps before being reset. Empty space is represented by a 0, walls are 1, the goal is 2 and the player is 3. The state representation is a greyscaled image of the entire grid where each entry is divided by 3 to lie in [0, 1]. The shape of the state representation is:. Once again the effect of each action is randomised at each state at the beginning of training. Figure 11 shows the structure of the maze environment. For Montezuma's Revenge, we follow standard suggestions and use the same environmental setup as prior work. Specifically, we use sticky actions with a probability of p = 0.25, a frame skip of 4, and do not show a terminal state on the loss of life. In all experiments we set γ = 0.99, use RMSProp with a learning rate of 0.0005, and scale the gradient norms during training to be at most 5. The network used is an MLP with 2 hidden layers of 256 units and ReLU non-linearities. We use 1-step Q-learning. Training lasts for 100k timesteps. ε is fixed at 0.01 for all methods except for ε-greedy DQN, in which it is linearly decayed from 1 to 0.01 over {100, 50k, 100k} timesteps. We train on a batch size of 64 after every timestep with a replay buffer of size 10k. The target network is updated every 200 timesteps. The embedding size used for the counts is 32. We set β = 0.1 for the scale of the count-based intrinsic motivation. For reward subtraction we consider subtracting {0.1, 1, 10} from the reward. For an optimistic initialisation bias, we consider setting the final layer's bias to {0.1, 1, 10}. We consider both of these methods with and without count-based intrinsic motivation.
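A minimal sketch of this static-hashing counter is given below. It replaces the counting bloom filter with a plain dictionary for clarity, and the class name, default k, and per-action table layout are assumptions rather than the paper's implementation.

```python
import numpy as np
from collections import defaultdict

class StaticHashCounter:
    """SimHash-style pseudo-counts: phi(s) = sign(A f(s)) with tabular counts.

    D is the flattened state dimension and k controls the counting granularity.
    A separate count table is kept per action, as described in the text.
    """
    def __init__(self, D, k=32, n_actions=4, seed=0):
        rng = np.random.RandomState(seed)
        self.A = rng.randn(k, D)                      # entries ~ N(0, 1)
        self.tables = [defaultdict(int) for _ in range(n_actions)]

    def _key(self, state):
        phi = np.sign(self.A @ state.ravel())
        return tuple(phi.astype(np.int8))

    def increment(self, state, action):
        self.tables[action][self._key(state)] += 1

    def count(self, state, action):
        return self.tables[action][self._key(state)]
```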
For OPIQ and its ablations we consider: M ∈ {0.1, 0.5, 2, 10}, C_action ∈ {0.1, 1, 10}, C_bootstrap ∈ {0.01, 0.1, 1, 10}. For all methods we run 20 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median test reward (largest area underneath the line) and report the median, lower and upper quartiles. The best hyperparameters we found were:
DQN ε-greedy: Decay rate: 100 timesteps.
Optimistic Initialisation Bias: Bias: 1, Pseudocount intrinsic motivation: True.
Reward Subtraction: Constant to subtract: 1, Pseudocount intrinsic motivation: False.
OPIQ: M: 0.5, C_action: 1, C_bootstrap: 1.
OPIQ without Optimistic Bootstrapping: M: 2, C_action: 10.
OPIQ without Pseudocounts: M: 2, C_action: 10, C_bootstrap: 10.
For the Maze experiments, we use 3-step Q-learning. Training lasts for 1 million timesteps. ε is decayed linearly from 1 to 0.01 over 50k timesteps for all methods except for ε-greedy DQN, in which it is linearly decayed from 1 to 0.01 over {100, 50k, 100k} timesteps. We train on a batch of 64 after every timestep with a replay buffer of size 250k. The target network is updated every 1000 timesteps. The embedding dimension for the counts is 128. For DQN + PC we consider β ∈ {0.01, 0.1, 1, 10, 100}. For all other methods we set β = 0.1, as it performed best. For reward subtraction we consider subtracting {0.1, 1, 10} from the reward. For an optimistic initialisation bias, we consider setting the final layer's bias to {0.1, 1, 10}. Both methods utilise a count-based intrinsic motivation. For OPIQ and its ablations we set M = 2, since it worked best in preliminary experiments. We consider: C_action ∈ {0.1, 1, 10, 100}, C_bootstrap ∈ {0.01, 0.1, 1, 10}. For the RND bonus we use the same architecture as the DQN for both the target and predictor networks, except the output is of size 128 instead of |A|. We scale the squared error by β_rnd ∈ {0.001, 0.01, 0.1, 1, 10, 100}. For DQN + DORA we use the same architecture for the E-network as the DQN. We add a sigmoid non-linearity to the output and initialise the final layer's weights and bias to 0, as described in the original method. We sweep across the scale of the intrinsic reward β_dora ∈ {}. For DQN + DORA OA we use β_dora = and sweep across β_dora_action ∈ {}. For BSP we use the following architecture: we use K = 10 different bootstrapped DQN heads, and sweep over β_bsp ∈ {0.1, 1, 3, 10, 30, 100}. For all methods we run 8 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median episodic reward (largest area underneath the line) and report the median, lower and upper quartiles. The best hyperparameters we found were:
OPIQ without Optimistic Bootstrapping: M: 2, C_action: 100.
OPIQ without Pseudocounts: M: 2, C_action: 100, C_bootstrap: 0.1.
We scale the squared error by β_rnd ∈ {0.001, 0.01, 0.1, 1}. For BSP we use the same architecture as in the original work. We use K = 10 different bootstrapped DQN heads, and sweep over β_bsp ∈ {0.1, 1, 10, 100}. For all methods we run 4 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median maximum episodic reward (largest area underneath the line) and report the median, lower and upper quartiles. The best hyperparameters we found were: OPIQ without Optimistic Bootstrapping: M = 2, C_action = 0.1, β_mmc = 0.005. We do a single gradient descent step on a minibatch of the 32 most recently visited states. We also recompute the intrinsic rewards when sampling minibatches to train the DQN. The intrinsic reward used for a state s is the squared error between the predictor network and the target network, β_rnd ||predictor(s) − target(s)||²₂. DQN + DORA: we train the E-values network using n-step SARSA (same n as the DQN) with γ_E = 0.99. We maintain a replay buffer of size (batch size * 4) and sample batch-size elements to train every timestep. The intrinsic reward we use is derived from the learned E-values E(s, a). DQN + DORA OA: we train the DQN + DORA agent described above and additionally augment the Q-values used for action selection with a bonus derived from the E-values. We train each bootstrapped DQN head on all of the data from the replay buffer, as is done in prior work. We normalise the gradients of the shared part of the network by 1/K, where K is the number of heads. The output of each head is Q_k + β_bsp p_k, where p_k is a randomly initialised network (of the same architecture as Q_k) which is kept fixed throughout training. β_bsp is a hyperparameter governing the scale of the prior regularisation. For our experiments on Montezuma's Revenge we additionally mixed the 3-step Q-learning target with the Monte Carlo return of the environmental rewards for the episode. That is, the 3-step targets y_t become y_t^MMC = (1 − β_mmc) y_t + β_mmc R^MC_t, where R^MC_t is the Monte Carlo return of the environmental rewards from timestep t to the end of the episode. If the episode hasn't finished yet, we used 0 for the Monte Carlo return. Our implementation differs from prior MMC implementations in that we do not use the intrinsic rewards as part of the Monte Carlo return. This is because we recompute the intrinsic rewards whenever we are using them as part of the targets for training, and recomputing all the intrinsic rewards for an entire episode (which can be over 1000 timesteps) is computationally prohibitive.

E FURTHER RESULTS
E.1 RANDOMISED CHAIN
Figure 12: The number of distinct states visited over training for the chain environment. The median across 20 seeds is plotted and the 25%-75% quartile is shown shaded.
We can see that OPIQ and its ablations explore the environment much more quickly than the count-based baselines. The ablation without optimistic bootstrapping exhibits significantly more variance than the other ablations, showing the importance of optimism during bootstrapping. On this simple task the ablation without count-based intrinsic motivation performs on par with the full OPIQ. This is most likely due to the simpler nature of the environment, which makes propagating rewards much easier than in the Maze. The importance of directed exploration is made abundantly clear by the ε-greedy baseline, which fails to explore much of the environment. Figure 13 compares OPIQ with differing values of M. We can clearly see that a small value of 0.1 results in insufficient exploration, due to the over-exploration of already visited state-action pairs. Additionally, if M is too large then the rate of exploration suffers due to the decreased optimism.
On this task we found that M = 0.5 performed best, but on the harder Maze environment we found that M = 2 was better in preliminary experiments.

E.2 MAZE
Figure 14: The Q⁺-values OPIQ used during bootstrapping with C_bootstrap = 0.01.
Figure 14 shows the values used during bootstrapping for OPIQ. These Q-values show optimism near the novel state-action pairs, which provides an incentive for the agent to return to this area of the state space.

E.3 MONTEZUMA'S REVENGE

To emphasise the necessity of optimistic Q-value estimates during exploration, we analyse the simple failure case for pessimistically initialised greedy Q-learning provided in the introduction. We use Algorithm 1, but use Q instead of Q⁺ for action selection. We will assume the agent acts greedily with respect to its Q-value estimates and breaks ties uniformly. Consider the single-state MDP in Figure 17 with H = 1. We use this MDP to show that, with probability 0.5, pessimistically initialised greedy Q-learning never finds the optimal policy. The agent receives a reward of +1 for selecting the right action and 0.1 otherwise. Therefore the optimal policy is to select the right action. Now consider the first episode: for all a, Q_1(s, a) = 0. Thus, the agent selects an action at random with uniform probability. If it selects the left action, it updates Q_1(s, L) to 0.1. Thus, in the second episode it selects the left action again, since Q_1(s, L) > 0 = Q_1(s, R). Our estimate of Q_1(s, L) never drops below 0.1, and so the right action is never taken. Thus, with probability 1/2 it never selects the correct action (also resulting in a linear regret of 0.9T). This counterexample applies for any non-negative intrinsic motivation (including no intrinsic motivation), and is unaffected by whether we utilise optimistic bootstrapping or not. Despite introducing an extra optimism term with a tunable hyperparameter M, OPIQ still requires the intrinsic motivation term b^T_i to ensure it does not under-explore in stochastic environments. We will prove that OPIQ without the intrinsic motivation term b^T_i does not satisfy Theorem 1. Specifically, we will show that there exists a 1-state, 2-action MDP with a stochastic reward function such that for all M > 0 the probability of incurring linear regret is greater than the allowed failure probability p. We choose to use stochastic rewards, as opposed to stochastic transitions, for a simpler proof. Figure 18: The parametrised MDP. The MDP we will use is shown in Figure 18, where λ > 1 and a ∈ (0, 1) such that p < 1 − a. H = 1, S = 1 and A = 2. The reward function for the left action is stochastic, and will return +1 reward with probability a and 0 otherwise. The reward for the right action is always a/λ. Let p > 0 be the probability with which we are allowed to incur a total regret not bounded by the theorem. OPIQ cannot depend on the value of λ or a, as they are unknown. OPIQ will recover the sub-optimal policy of taking the right action if every time we take the left action we receive a 0 reward. This will happen since our Q⁺-value estimate for the left action will eventually drop below the Q-value estimate for the right action, which is a/λ > 0. The sub-optimal policy will incur a linear regret, which is not bounded by the theorem. Our probability of failure is at least (1 − a)^R, where R is the number of times we select the left action, and this decreases as R increases. This corresponds to receiving a 0 reward for every one of the R left transitions we take. Note that (1 − a)^R is an underestimate of the probability of failure.
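The failure case above is easy to verify numerically. The sketch below simulates the single-state MDP with zero-initialised Q-values and greedy, uniformly tie-broken action selection; the learning rate and function name are illustrative assumptions.

```python
import numpy as np

def pessimistic_greedy_failure(episodes=100, seed=0):
    """Single-state MDP: left gives +0.1, right gives +1.0, horizon H = 1.

    With zero-initialised Q-values and greedy action selection (ties broken
    uniformly), an agent that happens to pick 'left' first never tries 'right'.
    """
    rng = np.random.RandomState(seed)
    Q = np.zeros(2)                    # actions: 0 = left (+0.1), 1 = right (+1.0)
    rewards = np.array([0.1, 1.0])
    for _ in range(episodes):
        ties = np.flatnonzero(Q == Q.max())
        a = rng.choice(ties)           # greedy with uniform tie-breaking
        Q[a] += 0.5 * (rewards[a] - Q[a])
    return Q                           # if left was chosen first: Q ~ [0.1, 0.0]
```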
For the first 2 episodes we will select both actions, and with probability (1 − a) the left action will return 0 reward. Our Q-values will then be: It is possible to take the left action only while its optimistic bonus keeps its Q⁺-value above a/λ, since the optimistic bonus for the right action decays to 0. This then provides a very loose upper bound for R of (λ/a)^{1/M}, which then leads to a further underestimation of the probability of failure. Assume for a contradiction that (1 − a)^R < p. This provides our contradiction, as we choose λ such that M > log(λ/a)/log(log(p)/log(1 − a)). We can always pick such a λ because log(λ/a) can get arbitrarily close to 0. So our probability of failure (of which (1 − a)^R is a severe underestimate) is greater than the allowed probability of failure p. Theorem 1. For any p ∈ (0, 1), with probability at least 1 − p the total regret of Q⁺ is at most O(√(H⁴SAT log(SAT/p))) for M ≥ 1 and at most O(H^{1+M}SAT^{1−M} + √(H⁴SAT log(SAT/p))) for 0 < M < 1. OPIQ is heavily based on UCB-H, and as such the proof very closely mirrors its proof except for a few minor differences. For completeness, we reproduce the entirety of the proof with the minor adjustments required for our scheme. The proof is concerned with bounding the regret of the algorithm after K episodes. The algorithm we follow takes as input the value of K, and changes the magnitudes of b^T_N based on it. We will make use of a corollary to Azuma's inequality multiple times during the proof. Theorem 2 (Azuma's Inequality). Let Z_0, ..., Z_n be a martingale sequence of random variables such that for all i there exists c_i with |Z_i − Z_{i−1}| < c_i almost surely. Then: Corollary 1. Let Z_0, ..., Z_n be a martingale sequence of random variables such that for all i there exists c_i with |Z_i − Z_{i−1}| < c_i almost surely. Then with probability at least 1 − δ: The following properties hold for η^i_N: Lemma 2 (adapted slightly from the UCB-H analysis). Define V. For any (s, a, t) ∈ S × A × [H], episode k ≤ K and N = N(s, a, t), suppose (s, a) was previously taken at step t of episodes k_1, ..., k_N < k. Then: Proof. We have the following recursive formula for Q⁺ at episode k and timestep t: We can produce a similar formula for Q* from the Bellman optimality equation. By definition of P, with probability at least 1 − δ, the following holds simultaneously for all (s, a, t, k), where N = N(s, a, t) and k_1, ..., k_N < k are the episodes where (s, a) was taken at step t. Then: Proof. For each fixed (s, a, t) ∈ S × A × [H], let k_0 = 0 and k_i = min({k ∈ [K] | k > k_{i−1}, (s^k_t, a^k_t) = (s, a)} ∪ {K + 1}). k_i is then the episode at which (s, a) was taken at step t for the ith time, or k_i = K + 1 if it has been taken fewer than i times. Then the random variable k_i is a stopping time. Let F_i be the σ-field generated by all the random variables until episode k_i, step t. Then by Azuma's Inequality we have that, with probability at least 1 − 2δ/(SAHK): Then by a union bound over all τ ∈ [K], we have that with probability at least 1 − 2δ/(SAH): Since, from Lemma 1, we have that: Since inequality equation 9 holds for all fixed τ ∈ [K] uniformly, it also holds for the random variable τ = N = N^k(s, a, t) ≤ K. Also note that 1[k_i ≤ K] = 1 for all i ≤ N. We can then additionally apply a union bound over all s ∈ S, a ∈ A, t ∈ [H] to give a bound which holds with probability 1 − 2δ for all (s, a, t, k) ∈ S × A × [H] × [K]. We then rescale δ to δ/2. By Lemma 1 we have a bound on b. Note on stochastic rewards: we have assumed so far that the reward function is deterministic for a simpler presentation of the proof. If we allowed for a stochastic reward function, the previous lemmas can easily be adapted to allow for it.
Lemma 2 would give us: We will now prove that our algorithm is provably efficient. We will do this by showing it has sub-linear regret, following closely the proof of Theorem 1 for UCB-H. Theorem 1. For any p ∈ (0, 1), with probability at least 1 − p the total regret of Q⁺ is at most O(√(H⁴SAT log(SAT/p))) for M ≥ 1 and at most O(H^{1+M}SAT^{1−M} + √(H⁴SAT log(SAT/p))) for 0 < M < 1. | We augment the Q-value estimates with a count-based bonus that ensures optimism during action selection and bootstrapping, even if the Q-value estimates are pessimistic. | 627 | scitldr |
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds. We explore how to design an efficient and stable model-free deep RL algorithm for continuous state and action spaces. To that end, we draw on the maximum entropy framework, which augments the standard maximum reward reinforcement learning objective with an entropy maximization term BID31 BID28 BID20 BID4 BID8. Maximum entropy reinforcement learning alters the RL objective, though the original objective can be recovered by using a temperature parameter BID8. More importantly, the maximum entropy formulation provides a substantial improvement in exploration and robustness: as discussed by BID30, maximum entropy policies are robust in the face of modeling and estimation errors, and as demonstrated by BID8, they improve exploration by acquiring diverse behaviors. Prior work has proposed model-free deep RL algorithms that perform on-policy learning with entropy maximization BID17, as well as off-policy methods based on soft Q-learning and its variants BID22 BID15 BID8. However, the on-policy variants suffer from poor sample complexity for the reasons discussed above, while the off-policy variants require complex approximate inference procedures in continuous action spaces. In this paper, we demonstrate that we can devise an off-policy maximum entropy actor-critic algorithm, which we call soft actor-critic, that provides for both sample-efficient learning and stability. This algorithm extends readily to very complex, high-dimensional tasks, such as the 21-DoF Humanoid benchmark BID3, where off-policy methods such as DDPG typically struggle to obtain good results BID6, while avoiding the complexity and potential instability associated with approximate inference in prior off-policy maximum entropy algorithms based on soft Q-learning BID8. In particular, we present a novel convergence proof for policy iteration in the maximum entropy framework. We then introduce a new algorithm based on an approximation to this procedure that can be practically implemented with deep neural networks, which we call soft actor-critic. We present empirical results that show that soft actor-critic attains a substantial improvement in both performance and sample efficiency over both off-policy and on-policy prior methods.
Our soft actor-critic algorithm incorporates three key ingredients: an actor-critic architecture with separate policy and value function networks, an off-policy formulation that enables reuse of previously collected data for efficiency, and entropy maximization to enable stability and exploration. We review prior works that draw on some of these ideas in this section. Actor-critic algorithms are typically derived starting from policy iteration, which alternates between policy evaluation-computing the value function for a policy-and policy improvement-using the value function to obtain a better policy BID0 BID25. In large-scale reinforcement learning problems, it is typically impractical to run either of these steps to convergence, and instead the value function and policy are optimized jointly. In this case, the policy is referred to as the actor, and the value function as the critic. Many actor-critic algorithms build on the standard, on-policy policy gradient formulation to update the actor BID18, and many of them also consider the entropy of the policy, but instead of maximizing the entropy, they use it as a regularizer BID21 BID5. This tends to improve stability, but results in very poor sample complexity. There have been efforts to increase the sample efficiency while retaining the robustness properties by incorporating off-policy samples and by using higher order variance reduction techniques BID17 BID6. However, fully off-policy algorithms still attain better efficiency. A particularly popular off-policy actor-critic method, DDPG BID11, which is a deep variant of the deterministic policy gradient BID23 algorithm, uses a Q-function estimator to enable off-policy learning, and a deterministic actor that maximizes this Q-function. As such, this method can be viewed both as a deterministic actor-critic algorithm and an approximate Q-learning algorithm. Unfortunately, the interplay between the deterministic actor network and the Q-function typically makes DDPG extremely difficult to stabilize and brittle to hyperparameter settings BID3 BID9. As a consequence, it is difficult to extend DDPG to very complex, high-dimensional tasks, and on-policy policy gradient methods still tend to produce the best results in such settings BID6. Our method instead combines off-policy actor-critic training with a stochastic actor, and further aims to maximize the entropy of this actor with an entropy maximization objective. We find that this actually results in a substantially more stable and scalable algorithm that, in practice, exceeds both the efficiency and final performance of DDPG. Maximum entropy reinforcement learning optimizes policies to maximize both the expected return and the expected entropy of the policy. This framework has been used in many contexts, from inverse reinforcement learning BID31 to optimal control BID27 BID28 BID20. In guided policy search BID10, a maximum entropy distribution is used to guide policy learning towards high-reward regions. More recently, several papers have noted the connection between Q-learning and policy gradient methods in the framework of maximum entropy learning BID17 BID8 BID15 BID22. While most of the prior works assume a discrete action space, BID16 approximate the maximum entropy distribution with a Gaussian and BID8 with a sampling network trained to draw samples from the optimal policy.
Although the soft Q-learning algorithm proposed by BID8 has a value function and actor network, it is not a true actor-critic algorithm: the Q-function is estimating the optimal Q-function, and the actor does not directly affect the Q-function except through the data distribution. Hence, BID8 motivates the actor network as an approximate sampler, rather than the actor in an actor-critic algorithm. Crucially, the convergence of this method hinges on how well this sampler approximates the true posterior. In contrast, we prove that our method converges to the optimal policy from a given policy class, regardless of the policy parameterization. Furthermore, these previously proposed maximum entropy methods generally do not exceed the performance of state-of-the-art off-policy algorithms, such as DDPG, when learning from scratch, though they may have other benefits, such as improved exploration and ease of finetuning. In our experiments, we demonstrate that our soft actor-critic algorithm does in fact exceed the performance of state-of-the-art off-policy deep RL methods by a wide margin. In this section, we introduce notation and summarize the standard and maximum entropy reinforcement learning frameworks. We address policy learning in continuous action spaces. To that end, we consider infinite-horizon Markov decision processes (MDP), defined by the tuple (S, A, p_s, r), where the state space S and the action space A are assumed to be continuous, and the unknown state transition probability p_s: S × S × A → [0, ∞) represents the probability density of the next state s_{t+1} ∈ S given the current state s_t ∈ S and action a_t ∈ A. The environment emits a bounded reward r: S × A → [r_min, r_max] on each transition, which we will abbreviate as r_t := r(s_t, a_t) to simplify notation. We will also use ρ_π(s_t) and ρ_π(s_t, a_t) to denote the state and state-action marginals of the trajectory distribution induced by a policy π(a_t|s_t). The standard objective used in reinforcement learning is to maximize the expected sum of rewards Σ_t E_{(s_t,a_t)∼ρ_π}[r_t]. We will consider a more general maximum entropy objective (see e.g. BID30), which favors stochastic policies by augmenting the objective with the expected entropy of the policy over ρ_π(s_t): J(π) = Σ_t E_{(s_t,a_t)∼ρ_π}[r(s_t, a_t) + αH(π(·|s_t))]. (1) The temperature parameter α determines the relative importance of the entropy term against the reward, and thus controls the stochasticity of the optimal policy. The maximum entropy objective differs from the standard maximum expected reward objective used in conventional reinforcement learning, though the conventional objective can be recovered in the limit as α → 0. For the rest of this paper, we will omit writing the temperature explicitly, as it can always be subsumed into the reward by scaling it by α⁻¹. The maximum entropy objective has a number of conceptual and practical advantages. First, the policy is incentivized to explore more widely, while giving up on clearly unpromising avenues. Second, the policy can capture multiple modes of near-optimal behavior. In particular, in problem settings where multiple actions seem equally attractive, the policy will commit equal probability mass to those actions. Lastly, prior work has observed substantially improved exploration from this objective BID8 BID22, and in our experiments, we observe that it considerably improves learning speed over state-of-the-art methods that optimize the conventional objective function.
We can extend the objective to infinite horizon problems by introducing a discount factor γ to ensure that the sum of expected rewards and entropies is finite. Writing down the precise maximum entropy objective for the infinite horizon discounted case is more involved BID26 and is deferred to Appendix A. Prior methods have proposed directly solving for the optimal Q-function, from which the optimal policy can be recovered BID31 BID4 BID8. In the next section, we will discuss how we can devise a soft actor-critic algorithm through a policy iteration formulation, where we instead evaluate the Q-function of the current policy and update the policy through an off-policy gradient update. Though such algorithms have previously been proposed for conventional reinforcement learning, our method is, to our knowledge, the first off-policy actor-critic method in the maximum entropy reinforcement learning framework. Our off-policy soft actor-critic algorithm can be derived starting from a novel maximum entropy variant of the policy iteration method. In this section, we will first present this derivation, verify that the corresponding algorithm converges to the optimal policy from its density class, and then present a practical deep reinforcement learning algorithm based on this theory. We will begin by deriving soft policy iteration, a general algorithm for learning optimal maximum entropy policies that alternates between policy evaluation and policy improvement in the maximum entropy framework. Our derivation is based on a tabular setting, to enable theoretical analysis and convergence guarantees, and we extend this method into the general continuous setting in the next section. We will show that soft policy iteration converges to the optimal policy within a set of policies which might correspond, for instance, to a set of parameterized densities. In the policy evaluation step of soft policy iteration, we wish to compute the value of a policy π according to the maximum entropy objective in Equation 1. For a fixed policy, the soft Q-value can be computed iteratively, by starting from any function Q: S × A → R and iteratively applying a modified version of the Bellman backup operator T^π defined by T^π Q(s_t, a_t) := r(s_t, a_t) + γ E_{s_{t+1}∼p_s}[V(s_{t+1})], (2) where V(s_t) = E_{a_t∼π}[Q(s_t, a_t) − log π(a_t|s_t)] (3) is the soft state value function. We can find the soft value of an arbitrary policy π by repeatedly applying T^π, as formalized in Lemma 1. Lemma 1 (Soft Policy Evaluation). Consider the soft Bellman backup operator T^π in Equation 2 and a mapping Q⁰: S × A → R, and define Q^{k+1} = T^π Q^k. Then the sequence Q^k will converge to the soft Q-value of π as k → ∞. Proof. See Appendix B.1. In the policy improvement step, we update the policy towards the exponential of the new Q-function. This particular choice of update can be guaranteed to result in an improved policy in terms of its soft value, as we show in this section. Since in practice we prefer policies that are tractable, we will additionally restrict the policy to some set of policies Π, which can correspond, for example, to a parameterized family of distributions such as Gaussians. To account for the constraint that π ∈ Π, we project the improved policy into the desired set of policies. While in principle we could choose any projection, it will turn out to be convenient to use the information projection defined in terms of the Kullback-Leibler divergence.
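A tabular sketch of one application of this backup operator is given below; the array shapes and function name are assumptions for illustration, and the temperature is taken to be 1 as in the text.

```python
import numpy as np

def soft_bellman_backup(Q, P, R, pi, gamma=0.99):
    """One application of the soft Bellman backup operator T^pi (equations 2-3).

    Q:  array [S, A] of current soft Q-value estimates.
    P:  array [S, A, S] of transition probabilities.
    R:  array [S, A] of rewards.
    pi: array [S, A] of action probabilities of the policy being evaluated.
    Repeated application converges to the soft Q-value of pi (Lemma 1).
    """
    log_pi = np.log(np.clip(pi, 1e-8, 1.0))
    # Soft state value: V(s) = E_{a~pi}[Q(s, a) - log pi(a|s)]
    V = np.sum(pi * (Q - log_pi), axis=1)
    return R + gamma * np.einsum('sap,p->sa', P, V)   # T^pi Q
```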
In other words, in the policy improvement step, for each state, we update the policy according to π_new = argmin_{π′∈Π} D_KL(π′(·|s_t) ‖ exp(Q^{π_old}(s_t, ·)) / Z^{π_old}(s_t)). (4) The partition function Z^{π_old}(s_t) normalizes the second KL argument, and while it is intractable in general, it does not contribute to the gradient with respect to the new policy and can thus be ignored, as noted in the next section. For this choice of projection, we can show that the new, projected policy has a higher value than the old policy with respect to the objective in Equation 1. We formalize this in Lemma 2. Lemma 2 (Soft Policy Improvement). Let π_old ∈ Π and let π_new be the optimizer of the minimization problem defined in Equation 4. Then Q^{π_new}(s_t, a_t) ≥ Q^{π_old}(s_t, a_t) for all (s_t, a_t) ∈ S × A. Proof. See Appendix B.2. The full soft policy iteration algorithm alternates between the soft policy evaluation and the soft policy improvement steps, and it will provably converge to the optimal maximum entropy policy among the policies in Π (Theorem 1). Although this algorithm will provably find the optimal solution, we can perform it in its exact form only in the tabular case. Therefore, we will next approximate the algorithm for continuous domains, where we need to rely on a function approximator to represent the Q-values, and running the two steps until convergence would be computationally too expensive. The approximation gives rise to a new practical algorithm, called soft actor-critic. Theorem 1 (Soft Policy Iteration). Repeated application of soft policy evaluation and soft policy improvement to any π ∈ Π converges to a policy π* such that Q^{π*}(s_t, a_t) ≥ Q^π(s_t, a_t) for all π ∈ Π and all (s_t, a_t) ∈ S × A. Proof. See Appendix B.3. As discussed above, large, continuous domains require us to derive a practical approximation to soft policy iteration. To that end, we will use function approximators for both the Q-function and the policy, and instead of running evaluation and improvement to convergence, alternate between optimizing both networks with stochastic gradient descent. For the remainder of this paper, we will consider a parameterized state value function V_ψ(s_t), soft Q-function Q_θ(s_t, a_t), and a tractable policy π_φ(a_t|s_t). The parameters of these networks are ψ, θ, and φ. In the following, we will derive update rules for these parameter vectors. The state value function approximates the soft value. There is no need in principle to include a separate function approximator for the state value, since it is related to the Q-function and policy according to Equation 3. This quantity can be estimated from a single action sample from the current policy without introducing a bias, but in practice, including a separate function approximator for the soft value can stabilize training and, as discussed later, can be used as a state-dependent baseline for learning the policy; it is also convenient to train simultaneously with the other networks. The soft value function is trained to minimize the squared residual error J_V(ψ) = E_{s_t∼D}[½ (V_ψ(s_t) − E_{a_t∼π_φ}[Q_θ(s_t, a_t) − log π_φ(a_t|s_t)])²], (5) where D is the distribution of previously sampled states and actions, or a replay buffer. The gradient of Equation 5 can be estimated with an unbiased estimator ∇̂_ψ J_V(ψ) = ∇_ψ V_ψ(s_t)(V_ψ(s_t) − Q_θ(s_t, a_t) + log π_φ(a_t|s_t)), (6) where the actions are sampled according to the current policy, instead of the replay buffer.
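When the policy class is unrestricted, the KL projection in equation 4 is minimised by the softmax of the soft Q-values; the sketch below computes this tabular special case. The function name and array layout are assumptions.

```python
import numpy as np

def soft_policy_improvement(Q):
    """Tabular soft policy improvement (equation 4) with an unrestricted policy class.

    Q: array [S, A] of soft Q-values of the old policy.
    Returns pi_new(a|s) proportional to exp(Q_old(s, a)).
    """
    Q_shift = Q - Q.max(axis=1, keepdims=True)   # subtract max for numerical stability
    pi_new = np.exp(Q_shift)
    return pi_new / pi_new.sum(axis=1, keepdims=True)
```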
The soft Q-function parameters can be trained to minimize the soft Bellman residual J_Q(θ) = E_{(s_t,a_t)∼D}[½ (Q_θ(s_t, a_t) − Q̂(s_t, a_t))²], with Q̂(s_t, a_t) = r(s_t, a_t) + γ E_{s_{t+1}∼p_s}[V_ψ̄(s_{t+1})], (7) which again can be optimized with stochastic unbiased gradients ∇̂_θ J_Q(θ) = ∇_θ Q_θ(s_t, a_t)(Q_θ(s_t, a_t) − r(s_t, a_t) − γ V_ψ̄(s_{t+1})). (8) The update makes use of a target value network V_ψ̄, where ψ̄ is an exponentially moving average of the value network weights, which has been shown to stabilize training BID13, although we found soft actor-critic to be able to learn robustly also in the absence of a target network. Finally, the policy parameters can be learned by directly minimizing the KL-divergence in Equation 4, which we reproduce here in parametrized form for completeness: J_π(φ) = E_{s_t∼D}[D_KL(π_φ(·|s_t) ‖ exp(Q_θ(s_t, ·))/Z_θ(s_t))]. (9) There are several options for minimizing J_π, depending on the choice of the policy class. For simple distributions, such as Gaussians, we can use the reparametrization trick, which allows us to backpropagate the gradient from the critic network and leads to a DDPG-style estimator. However, if the policy depends on discrete latent variables, such as is the case for mixture models, the reparametrization trick cannot be used. We therefore propose to use a likelihood ratio gradient estimator: ∇̂_φ J_π(φ) = ∇_φ log π_φ(a_t|s_t)(log π_φ(a_t|s_t) + log Z_θ(s_t) + 1 − Q_θ(s_t, a_t) + b(s_t)), (10) where b(s_t) is a state-dependent baseline BID18. We can approximately center the learning signal and eliminate the intractable log-partition function by choosing b(s_t) = V_ψ(s_t) − log Z_θ(s_t) − 1, which yields the final gradient estimator ∇̂_φ J_π(φ) = ∇_φ log π_φ(a_t|s_t)(log π_φ(a_t|s_t) + V_ψ(s_t) − Q_θ(s_t, a_t)). (11) The complete algorithm is described in Algorithm 1; for each gradient step, ψ, θ, and φ are updated with the stochastic gradients above, and ψ̄ is moved towards ψ. The method alternates between collecting experience from the environment with the current policy and updating the function approximators using the stochastic gradients on batches sampled from a replay buffer. In practice we found a combination of a single environment step and multiple gradient steps to work best (see Appendix E). Using off-policy data from a replay buffer is feasible because both value estimators and the policy can be trained entirely on off-policy data. The algorithm is agnostic to the parameterization of the policy, as long as it can be evaluated for any arbitrary state-action tuple. We will next suggest a practical parameterization for the policy, based on Gaussian mixtures. Although we could use a simple policy represented by a Gaussian, as is common in prior work, the maximum entropy framework aims to maximize the randomness of the learned policy. Therefore, a more expressive but still tractable distribution can endow our method with more effective exploration and robustness, which are the typically cited benefits of entropy maximization BID30. To that end, we propose a practical multimodal representation based on a mixture of K Gaussians. This can approximate any distribution to arbitrary precision as K → ∞, but even for practical numbers of mixture elements, it can provide a very expressive distribution in medium-dimensional action spaces. Although the complexity of evaluating or sampling from the resulting distribution scales linearly in K, our experiments indicate that a relatively small number of mixture components, typically just two or four, is sufficient to achieve high performance, thus making the algorithm scalable to complex domains with high-dimensional action spaces. We define the policy as π_φ(a_t|s_t) ∝ Σ_{i=1}^{K} w^{(i)}_φ(s_t) N(a_t; μ^{(i)}_φ(s_t), Σ^{(i)}_φ(s_t)), where w^{(i)}_φ(s_t), μ^{(i)}_φ(s_t), and Σ^{(i)}_φ(s_t) are the unnormalized mixture weights, means, and covariances, respectively, which all can depend on s_t in complex ways if expressed as neural networks. We also apply a squashing function to limit the actions to a bounded interval, as explained in Appendix C.
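The sketch below mirrors the three objectives in equations 5, 7, and 9 in a PyTorch-like form. The network interfaces (value_net(s), q_net(s, a), and policy.sample(s) returning actions with their log-probabilities) are assumptions for illustration, and the policy term uses the reparameterised form rather than the likelihood-ratio estimator, a variant the ablations also evaluate; constant factors such as the ½ are omitted.

```python
import torch
import torch.nn.functional as F

def sac_losses(value_net, target_value_net, q_net, policy, batch, gamma=0.99):
    """Sketch of the soft actor-critic objectives, assuming the interfaces above."""
    s, a, r, s_next = batch
    # Soft Q-function loss: Bellman residual against r + gamma * V_target(s').
    with torch.no_grad():
        q_target = r + gamma * target_value_net(s_next).squeeze(-1)
    q_loss = F.mse_loss(q_net(s, a).squeeze(-1), q_target)
    # Value function loss: V(s) regressed onto E_a[Q(s, a) - log pi(a|s)].
    a_new, log_pi = policy.sample(s)
    with torch.no_grad():
        v_target = q_net(s, a_new).squeeze(-1) - log_pi
    value_loss = F.mse_loss(value_net(s).squeeze(-1), v_target)
    # Policy loss: minimise E[log pi(a|s) - Q(s, a)] (reparameterised form).
    policy_loss = (log_pi - q_net(s, a_new).squeeze(-1)).mean()
    return value_loss, q_loss, policy_loss
```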
Note that, in contrast to soft Q-learning BID8, our algorithm does not assume that the policy can accurately approximate the optimal exponentiated Q-function distribution. The convergence result for soft policy iteration holds even for policies that are restricted to a policy class, in contrast to prior methods of this type. The goal of our experimental evaluation is to understand how the sample complexity and stability of our method compare with prior off-policy and on-policy deep reinforcement learning algorithms. To that end, we evaluate on a range of challenging continuous control tasks from the OpenAI gym benchmark suite BID2. For the Swimmer and Humanoid environments, we use the rllab implementations BID3, which have more well-behaved observation spaces. Although the easier tasks in this benchmark suite can be solved by a wide range of different algorithms, the more complex benchmarks, such as the 21-dimensional Humanoid, are exceptionally difficult to solve with off-policy algorithms BID3. The stability of the algorithm also plays a large role in performance: easier tasks make it more practical to tune hyperparameters to achieve good results, while the already narrow basins of effective hyperparameters become prohibitively small for the more sensitive algorithms on the most high-dimensional benchmarks, leading to poor performance BID6. In our comparisons, we compare to trust region policy optimization (TRPO) BID21, a stable and effective on-policy policy gradient algorithm; deep deterministic policy gradient (DDPG) BID11, an algorithm that is regarded as one of the more efficient off-policy deep RL methods BID3; as well as soft Q-learning (SQL) BID8, an off-policy algorithm for learning maximum entropy policies. Although DDPG is very efficient, it is also known to be more sensitive to hyperparameter settings, which limits its effectiveness on complex tasks BID6 BID9. FIG0 shows the total average reward of evaluation rollouts during training for the various methods. Exploration noise was turned off for evaluation for DDPG and TRPO. For maximum entropy algorithms, which do not explicitly inject exploration noise, we either evaluated with the exploration noise (SQL) or approximated the maximum a posteriori action by choosing the mean of the mixture component that achieves the highest Q-value (SAC). Performance across tasks. The results show that, overall, SAC substantially outperforms DDPG on all of the benchmark tasks in terms of final performance, and learns faster than any of the baselines in most environments. The only exceptions are Swimmer, where DDPG is slightly faster, and Ant-v1, where SQL is initially faster, but plateaus at a lower final performance. On the hardest tasks, Ant-v1 and Humanoid (rllab), DDPG is unable to make any progress, a result that is corroborated by prior work BID6 BID3. SAC still learns substantially faster than TRPO on these tasks, likely as a consequence of improved stability. Poor stability is exacerbated by high-dimensional and complex tasks. Even though TRPO does not make perceivable progress for some tasks within the range depicted in the figures, it will eventually solve all of them after substantially more iterations. The quantitative results attained by SAC in our experiments also compare very favorably to those reported by other methods in prior work BID3 BID6 BID9, indicating that both the sample efficiency and final performance of SAC on these benchmark tasks exceed the state of the art. All hyperparameters used in this experiment for SAC are listed in Appendix E.
Sensitivity to random seeds. In FIG1, we show the performance of multiple individual seeds on the HalfCheetah-v1 benchmark for DDPG, TRPO, and SAC (FIG0 shows the average over seeds). The individual seeds attain much more consistent performance with SAC, while DDPG exhibits very high variability across seeds, indicating substantially worse stability. TRPO is unable to make progress within the first 1000 episodes. For SAC, we performed 16 gradient steps between each environment step, which becomes prohibitively slow in terms of wall clock time for longer experiments (in FIG0 we used 4 gradient steps). The results in the previous section suggest that algorithms based on the maximum entropy principle can outperform conventional RL methods on challenging tasks such as the Ant and Humanoid. In this section, we further examine which particular components of SAC are important for good performance. To that end, we consider several ablations to pinpoint the most essential differences between our method and standard RL methods that do not maximize entropy. We use DDPG as the representative non-entropy-maximizing algorithm, since it is the most structurally similar to SAC. This section examines specific algorithm components, while a study of the sensitivity to hyperparameter settings (most importantly the reward scale and the number of gradient steps per environment step) is deferred to Appendix D. Importance of entropy maximization. The main difference between SAC and DDPG is the inclusion of entropy maximization in the objective. The entropy appears in both the policy and the value function. In the policy, it prevents premature convergence of the policy variance (Equation 9). In the value function, it encourages exploration by increasing the value of regions of state space that lead to high-entropy behavior (Equation 5). FIG2 compares how inclusion of this term in the policy and value updates affects the performance, by removing the entropy from each one in turn. As evident from the figure, including the entropy in the policy objective is crucial, while maximizing the entropy as part of the value function is less important, and can even hinder learning in the beginning of training. Exploration noise. Another difference is the exploration noise: the maximum entropy framework naturally gives rise to Boltzmann exploration for SAC, whereas DDPG uses external OU noise BID29 BID11. For the next experiment, we used a policy with a single mixture component and compared Boltzmann exploration to exploration via external OU noise (FIG2). We repeated the experiment with the OU noise parameters that we used with DDPG (θ = 0.15, σ = 0.3), and with parameters that we found worked best for SAC without Boltzmann exploration (θ = 1, σ = 0.3). The results indicate that Boltzmann exploration outperforms external OU noise. Separate value network. Our method also differs from DDPG in that it uses a separate network to predict the state values, in addition to the Q-values. The value network serves two purposes: it is used to bootstrap the TD updates for learning the Q-function (Equation 7), and it serves as a baseline to reduce the variance of the policy gradients (Equation 11). The results in FIG2 indicate that the value network plays an important role in the policy gradient, but has only a minor effect on the TD updates. Nevertheless, the best performance is achieved when the value network is employed for both purposes. From SAC to DDPG. Soft actor-critic closely resembles DDPG with a stochastic actor.
To study which of the components that differ between the two algorithms are most important, we evaluated a range of SAC variants by swapping components of SAC with their DDPG counterparts. In FIG3 we evaluate four versions obtained through incremental modifications: 1) the original SAC, 2) replacing the likelihood ratio policy gradient estimator with an estimator obtained through the reparametrization trick, 3) replacing the stochastic policy with a deterministic policy and Boltzmann exploration with OU noise, and 4) eliminating the separate value network by directly using the Q-function evaluated at the mean action. The figure suggests two interesting points: First, the reparametrization trick yields somewhat faster and more stable learning compared to the likelihood ratio policy gradient estimator, though this precludes the use of multimodal mixtures of Gaussians. Second, the use of a deterministic policy degrades performance substantially in terms of both average return (thick line) and variance across random seeds (shaded region). This observation indicates that entropy maximization, and particularly the use of a stochastic policy, can make off-policy deep RL significantly more robust and efficient compared to algorithms that maximize the expected return, especially with deterministic policies. We presented soft actor-critic (SAC), an off-policy maximum entropy deep reinforcement learning algorithm that provides sample-efficient learning while retaining the benefits of entropy maximization and stability. Our theoretical results derive soft policy iteration, which we show to converge to the optimal policy. From this result, we can formulate a soft actor-critic algorithm, and we empirically show that it outperforms state-of-the-art model-free deep RL methods, including the off-policy DDPG algorithm and the on-policy TRPO algorithm. In fact, the sample efficiency of this approach actually exceeds that of DDPG by a substantial margin. Our results suggest that stochastic, entropy-maximizing reinforcement learning algorithms can provide a promising avenue for improved robustness and stability, and further exploration of maximum entropy methods, including methods that incorporate second order information (e.g., trust regions BID21) or more expressive policy classes, is an exciting avenue for future work. The exact definition of the discounted maximum entropy objective is complicated by the fact that, when using a discount factor for policy gradient methods, we typically do not discount the state distribution, only the rewards. In that sense, discounted policy gradients typically do not optimize the true discounted objective. Instead, they optimize average reward, with the discount serving to reduce variance, as discussed by BID26. However, we can define the objective that is optimized under a discount factor as DISPLAYFORM0 This objective corresponds to maximizing the discounted expected reward and entropy for future states originating from every state-action tuple (s t, a t) weighted by its probability ρ π under the current policy. B.1 LEMMA 1 Lemma 1 (Soft Policy Evaluation). Consider the soft Bellman backup operator T π in Equation 2 and a mapping Q 0: S × A → R, and define Q k+1 = T π Q k. Then the sequence Q k will converge to the soft Q-value of π as k → ∞. Proof. Define the entropy-augmented reward as r π (s t, a t) ≜ r(s t, a t) + E st+1∼ps [H (π( · |s t+1))], rewrite the update rule as DISPLAYFORM0, and apply the standard convergence results for policy evaluation BID25. Lemma 2 (Soft Policy Improvement).
Let π old ∈ Π and let π new be the optimizer of the minimization problem defined in Equation 4. Then Q πnew (s t, a t) ≥ Q π old (s t, a t) for all (s t, a t) ∈ S × A.Proof. Let π old ∈ Π and let Q π old and V π old be the corresponding soft state-action value and soft state value, and let π new be defined as DISPLAYFORM0 It must be the case that J π old (π new ( · |s t)) ≤ J π old (π old ( · |s t)), since we can always choose π new = π old ∈ Π. Hence DISPLAYFORM1 and since partition function Z π old depends only on the state, the inequality reduces to DISPLAYFORM2 Next, consider the soft Bellman equation: DISPLAYFORM3... DISPLAYFORM4 where we have repeatedly expanded Q π old on the RHS by applying the soft Bellman equation and the bound in Equation 17. Convergence to Q πnew follows from Lemma 1. Theorem 1 (Soft Policy Iteration). Repeated application of soft policy evaluation and soft policy improvement to any π ∈ Π converges to a policy π * (· |s t) such that Q π * (s t, a t) > Q π (s t, a t) for all π ∈ Π, π = π * and for all (s t, a t) ∈ S × A.Proof. Let π i be the policy at iteration i. By Lemma 2, the sequence Q πi is monotonically increasing. Since Q π is bounded above for π ∈ Π (both the reward and entropy are bounded), the sequence converges to some π *. We will still need to show that π * is indeed optimal. At convergence, it must be case that J π * (π * ( · |s t)) < J π * (π( · |s t)) for all π ∈ Π, π = π *. Using the same iterative argument as in the proof of Lemma 2, we get Q π * (s t, a t) > Q π (s t, a t) for all (s t, a t) ∈ S × A, that is, the soft value of any other policy in Π is lower than that of the converged policy. Hence π * is optimal in Π. We use an unbounded GMM as the action distribution. However, in practice, the actions needs to be bounded to a finite interval. To that end, we apply an invertible squashing function (tanh) to the GMM samples, and employ the change of variables formula to compute the likelihoods of the bounded actions. In the other words, let u ∈ R D be a random variable and µ(u|s) the corresponding density with infinite support. Then a = tanh(u), where tanh is applied elementwise, is a random variable with support in (−1, 1) with a density given by DISPLAYFORM0 Since the Jacobian da /du = diag(1 − tanh 2 (u)) is diagonal, the log-likelihood has a simple form DISPLAYFORM1 where u i is the i th element of u. This section discusses the sensitivity to hyperparameters. All of the following experiments are on the HalfCheetah-v1 benchmark, and although we found these to be representative in most cases, they may differ between the environments. For other environments, we only swept over the reward scale only. Reward scale. Soft actor-critic is particularly sensitive to the reward magnitude, because it serves the role of the temperature of the energy-based optimal policy and thus controls its stochasticity. FIG4 shows how learning performance changes when the reward scale is varied: For small reward magnitudes, the policy becomes nearly uniform, and consequently the fails to exploit the reward signal, ing in substantial degradation of performance. For large reward magnitudes, the model learns quickly at first, but the policy quickly becomes nearly deterministic, leading to poor exploration and subsequent failure due to lack of adequate exploration. With the right reward magnitude, the model balances exploration and exploitation, leading to fast and stable learning. 
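Returning to the squashing transformation of Appendix C above: the change-of-variables correction is just a sum of per-dimension terms, as the short sketch below illustrates for a diagonal Gaussian base distribution. This is our own illustrative code (a small constant is added inside the logarithm purely for numerical stability).

    import torch

    def squashed_log_prob(base_dist, u):
        """log pi(a|s) for a = tanh(u), where u is drawn from the unbounded base density mu(u|s).

        Implements: log pi(a|s) = log mu(u|s) - sum_i log(1 - tanh(u_i)^2).
        """
        log_mu = base_dist.log_prob(u).sum(dim=-1)                       # density of the pre-squash sample
        correction = torch.log(1.0 - torch.tanh(u) ** 2 + 1e-6).sum(dim=-1)
        return log_mu - correction

    # usage sketch with a diagonal Gaussian base distribution
    base = torch.distributions.Normal(torch.zeros(3), torch.ones(3))
    u = base.sample()
    a = torch.tanh(u)                       # bounded action actually executed
    log_pi = squashed_log_prob(base, u)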
In practice, we found reward scale to be the only hyperparameter that requires substantial tuning, and since this parameter has a natural interpretation in the maximum entropy framework, this provides good intuition for how to adjust this parameter. Mixture components. We experimented with different numbers of mixture components in the Gaussian mixture policy (FIG4). We found that the number of components did not affect the learning performance very much, though larger numbers of components consistently attained somewhat better results. Policy evaluation. Since SAC converges to stochastic policies, it is generally beneficial to make the final policy deterministic at the end for best performance. For evaluation, we approximate the maximum a posteriori action by choosing the action that maximizes the Q-function among the mixture component means. FIG4 compares training returns to test returns obtained with this strategy. The test returns are substantially better. It should be noted that all of the training curves depict the sum of rewards, which is different from the objective optimized by SAC and SQL, which both also maximize the entropy. Target network update. In FIG4 we varied the smoothing constant, τ, for the target value network updates. A value of one corresponds to a hard update where the weights are copied directly at every iteration, and values close to zero correspond to an exponentially moving average of the network weights. Interestingly, SAC is relatively insensitive to the smoothing constant: smaller τ tends to yield higher performance at the end with the cost of marginally slower progress in the beginning, but the difference is small. In prior work, the main use of the target network was to stabilize training BID13 BID11, but interestingly, SAC is able to learn stably even when effectively no separate target network is used and the weights are copied directly (τ = 1), although at the cost of a minor degradation in performance. Replay buffer size. Next, we experimented with the size of the experience replay buffer (FIG4), which we found to be important for environments where the optimal policy becomes nearly deterministic at convergence, such as HalfCheetah-v1. For such environments, the exploration noise becomes negligible at the end, making the content of the replay buffer less diverse and resulting in overfitting and instability. Use of a higher capacity buffer solves this problem by allowing the method to remember the failures from the beginning of learning, but slows down learning by allocating some of the network capacity to modeling the suboptimal initial experience. Note that a return of just 4800 is considered the threshold level for solving this benchmark task and that the performance in all cases is well beyond the threshold. We used a buffer size of 10 million samples for HalfCheetah-v1, and 1 million samples for the other environments. Gradient steps per time step. Finally, we experimented with the number of actor and critic gradient steps per time step of the algorithm (FIG4). Prior work has observed that increasing the number of gradient steps for DDPG tends to improve sample efficiency BID7 BID19, but only up to a point. We found 4 gradient updates per time step to be optimal for DDPG, before we saw degradation of performance. SAC is able to effectively benefit from substantially larger numbers of gradient updates in most environments, with performance increasing steadily until 16 gradient updates per step.
In some environments, such as Humanoid, we did not observe as much benefit. The ability to take multiple gradient steps without negatively affecting the algorithm is important especially for learning in the real world, where experience collection is the bottleneck, and it is typical to run the learning script asynchronously in a separate thread. In this experiment, we used a replay buffer size of 1 million samples, and therefore the performance suddenly drops after reaching a return of approximately 10 thousand due to the reason discussed in the previous section. E HYPERPARAMETERS TAB1 lists the common SAC parameters used in the comparative evaluation in FIG0, and TAB2 lists the parameters that varied across the environments. | We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. | 628 | scitldr |
In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations. The most common approaches under this framework are Behaviour Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective policies with access to a very limited set of demonstrations, a scenario in which BC methods often fail. Unfortunately, directly comparing the algorithms for these methods does not provide adequate intuition for understanding this difference in performance. This is the motivating factor for our work. We begin by presenting $f$-MAX, a generalization of AIRL , a state-of-the-art IRL method. $f$-MAX provides grounds for more directly comparing the objectives for LfD. We demonstrate that $f$-MAX, and by inheritance AIRL, is a subset of the cost-regularized IRL framework laid out by. We conclude by empirically evaluating the factors of difference between various LfD objectives in the continuous control domain. Modern advances in reinforcement learning aim to alleviate the need for hand-engineered decisionmaking and control algorithms by designing general purpose methods that learn to optimize provided reward functions. In many cases however, it is either too challenging to optimize a given reward (e.g. due to sparsity of signal), or it is simply impossible to design a reward function that captures the intricate details of desired outcomes. One approach to overcoming such hurdles is Learning from Demonstrations (LfD), where algorithms are provided with expert demonstrations of how to accomplish desired tasks. The most common approaches in the LfD framework are Behaviour Cloning (BC) and Inverse Reinforcement Learning (IRL) BID22 BID15. In standard BC, learning from demonstrations is treated as a supervised learning problem and policies are trained to regress expert actions from a dataset of expert demonstrations. Other forms of Behaviour Cloning, such as DAgger BID21, consider how to make use of an expert in a more optimal fashion. On the other hand, in IRL the aim is to infer the reward function of the expert, and subsequently train a policy to optimize this reward. The motivation for IRL stems from the intuition that the reward function is the most concise and portable representation of a task BID15 BID0.Unfortunately, the standard IRL formulation BID15 faces degeneracy issues 1. A successful framework for overcoming such challenges is the Maximum-Entropy (Max-Ent) IRL method BID28 BID27. A line of research stemming from the Max-Ent IRL framework has lead to recent "adversarial" methods BID12 BID4 BID7 1 for example, any policy is optimal for the constant reward function r(s, a) = 0 2 Consider a Markov Decision Process (MDP) represented as a tuple (S, A, P, r, ρ 0, γ) with statespace S, action-space A, dynamics P: S × A × S →, reward function r(s, a), initial state distribution ρ 0, and discount factor γ ∈. In Maximum Entropy (Max-Ent) reinforcement learning BID23 BID25 BID20 BID6 BID10, the goal is to find a policy π such that trajectories sampled using this policy follow the distribution DISPLAYFORM0 where τ = (s 0, a 0, s 1, a 1, ...) denotes a trajectory, and R(τ) = t r(s t, a t) and Z is the partition function. Hence, trajectories that accumulate more reward are exponentially more likely to be sampled. 
Converse to the standard RL setting, in Max-Ent IRL BID28 BID27 we are instead presented with an optimal policy π exp, or more realistically, sample trajectories from such a policy, and we seek to find a reward function r that maximizes the likelihood of the trajectories sampled from π exp. Formally, our objective is: DISPLAYFORM1 Being an energy-based modelling objective, the difficulty in performing this optimization arises from estimating the partition function Z. Initial methods addressed this problem using dynamic programming BID28 BID27, and recent approaches present methods aimed at intractable domains with unknown dynamics BID5 BID12 BID4 BID7 BID13.Instead of recovering the expert's reward function and policy, recent successful methods in Max-Ent IRL aim to directly recover the policy that would from the full process. Since such methods only recover the policy, it would be more accurate to refer to them as Imitation Learning algorithms. However, to avoid confusion with Behaviour Cloning methods, in this work we will refer to them as direct methods for Max-Ent IRL.GAIL: Generative Adversarial Imitation Learning Before describing the work of BID12, we establish the definition of causal entropy BID2. Intuitively, causal entropy can be thought of as the "amount of options" the policy has in each state, in expectation. DISPLAYFORM2 Let C denote a class of cost functions (negative reward functions).Furthermore, let ρ exp (s, a), ρ π (s, a) denote the state-action marginal distributions of the expert and student policy respectively. BID12 begin with a regularized Max-Ent IRL objective, DISPLAYFORM3 where ψ: C → R is a convex regularization function on the space of cost functions, and IRL ψ (π exp) returns the optimal cost function given the expert and choice of regularization. Also, while not immediately clear, note that DISPLAYFORM4, be a function that returns the optimal Max-Ent policy given cost c(s, a). BID12 show that DISPLAYFORM5 where ψ * denotes the convex conjugate of ψ. This tells us that if we were to find the cost function c(s, a) using the regularized Max-Ent IRL objective 3, and subsequently find the optimal Max-Ent policy for this cost, we would arrive at the same policy had we directly optimized objective 4 by searching for the policy. Directly optimizing 4 is challenging for many choices of ψ. Interestingly however, BID12 show that for any symmetric f -divergences BID14, there exists a choice of ψ such that equation 4 is equivalent to RL ) ). In such settings, due to a close connection between binary classifiers and symmetric f -divergences BID16, efficient algorithms can be formed. DISPLAYFORM6 The special case for Jensen-Shannon divergence leads to the successful method dubbed Generative Adversarial Imitation Learning (GAIL). As before, let ρ exp (s, a), ρ π (s, a) denote the state-action marginal distributions of the expert and student policy respectively. Let D(s, a): S × A → be a binary classifier -often referred to as the discriminator -for identifying positive samples (sampled from ρ exp (s, a)) from negative samples (sampled from ρ π (s, a)). Using RL, the student policy is trained to maximize E τ ∼π [t log D(s t, a t)] − λH causal (π), where λ is a hyperparameter. The training procedure alternates between optimizing the discriminator and updating the policy. 
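As a concrete reference for the alternating procedure just described, the sketch below shows a single discriminator update and the reward assigned to policy samples, in PyTorch-style Python. All names and network sizes are ours, and the causal-entropy bonus and the RL update of the student policy are omitted; this is an illustration of the idea rather than a reference implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    obs_dim, act_dim = 17, 6                                   # illustrative dimensions only
    disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.Tanh(),
                         nn.Linear(128, 1))                    # outputs the logit of D(s, a)
    opt = torch.optim.Adam(disc.parameters(), lr=3e-4)

    def discriminator_step(expert_sa, policy_sa):
        """One gradient step: classify expert pairs (label 1) against policy pairs (label 0)."""
        logits_exp, logits_pol = disc(expert_sa), disc(policy_sa)
        loss = F.binary_cross_entropy_with_logits(logits_exp, torch.ones_like(logits_exp)) + \
               F.binary_cross_entropy_with_logits(logits_pol, torch.zeros_like(logits_pol))
        opt.zero_grad(); loss.backward(); opt.step()

    def gail_reward(policy_sa):
        """Per-step reward log D(s, a) handed to the RL subroutine that trains the student policy."""
        with torch.no_grad():
            return F.logsigmoid(disc(policy_sa)).squeeze(-1)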
As noted, it is shown that this training procedure minimizes the Jensen-Shannon divergence between ρ exp (s, a) and ρ π (s, a) BID12.AIRL: Adversarial Inverse Reinforcement Learning Subsequent to the advent of GAIL BID12, BID4 present a theoretical discussion relating Generative Adversarial Networks (GANs) BID8, IRL, and energy-based models. They demonstrate how an adversarial training approach could recover the Max-Ent reward function and simultaneously train the Max-Ent policy corresponding to that reward. Building on this discussion, BID7 present a practical implementation of this method, named Adversarial Inverse Reinforcement Learning (AIRL).As before, let ρ exp (s, a), ρ π (s, a) denote the state-action marginal distributions of the expert and student policy respectively and let D(s, a): S × A → be the discriminator. In AIRL, the discriminator is parameterized as, DISPLAYFORM7 where f (s, a): S × A → R, and π(a|s) denotes the likelihood of the action under the policy. AIRL defines the reward function, r(s, a):= log D(s, a) − log (1 − D(s, a)), and sets the objective for the student policy to be the RL objective, max π E τ ∼π [t r(s t, a t)]. As in GAIL, this leads to an iterative optimization process alternating between optimizing the discriminator and the policy. At convergence, the advantage function of the expert is recovered. Given this observation, important considerations are made regarding how to extract the true reward function from f (s, a). When the objective is only to perform Imitation Learning, and we do not care to recover the reward function, the discriminator does not use the special parameterization discussed above and is instead direclty represented as a function D(s, a): S × A →, as done in GAIL BID12.Performance With Respect to BC Methods such as GAIL and AIRL have demonstrated significant performance gains compared to Behaviour Cloning. In particular, in standard Mujoco benchmarks BID24 BID3, adversarial methods for Max-Ent IRL achieve strong performance using a very limited amount of demonstrations from an expert policy, an important failure scenario for standard Behaviour Cloning. Ho & Ermon FORMULA0 demonstrate that Max-Ent IRL is the dual problem of matching ρ π (s, a) to ρ exp (s, a); indeed as noted above, GAIL BID12 optimizes the Jensen-Shannon divergence between the two distributions. In section 3 we present f -MAX, a method for matching ρ π (s, a) to ρ exp (s, a) using arbitrary f -divergences BID14. Hence, in this section we recall this class of statistical divergences as well as methods for using them for training generative models. Let P, Q be two distributions with density functions p, q. For any convex, lower-semicontinuous function f: R + → R a statistical divergence can be defined as: DISPLAYFORM0 q(x). Divergences derived in this manner are called f-divergences and amongst many interesting divergences, include the forward and reverse KL. BID17 present a variational estimation method for f -divergences between arbitrary distributions P, Q. Using the notation of BID18 we can write, DISPLAYFORM1 where T is an arbitrary class of functions T ω: X → R, and f * is the convex conjugate of f. Under mild conditions BID17 equality holds between the two sides. Motivated by this variational approximation as well as Generative Adversarial Networks (GANs) BID8, BID18 present an iterative optimization scheme for matching an implicit distribution 2 Q to a fixed distribution P using any f -divergence. 
For a given f -divergence, the corresponding minimax optimization is, DISPLAYFORM2 Nowozin et al. FORMULA0 discuss practical parameterizations of T ω, but to avoid notational clutter we will use the form above. DISPLAYFORM3 We begin by presenting f -MAX, a generalization of AIRL BID7 which provides a more intuitive interpretation of what similar algorithms accomplish. Imagine, for some f, we aim to train a policy by optimizing the f -divergence DISPLAYFORM4 To do so, we propose the following iterative optimization procedure, DISPLAYFORM5 DISPLAYFORM6 where f * and T ω are as defined in section 2.2. Equation 8 is the same as the inner maximization of the f -GAN objective in equation 7; this objective optimizes T ω so that equation 8 best approximates DISPLAYFORM7 On the other hand, for the policy objective, using the identities in appendix A we have, DISPLAYFORM8 which implies that the policy objective is equivalent to minimizing equation 8 with respect to π. With an identical proof as in Goodfellow et al. (2014, Proposition 2), if in each iteration the optimal T ω is found, the described optimization procedure converges to the global optimum where the policy's state-action marginal distribution matches that of the expert's. This is equivalent to iteratively computing D f (ρ exp (s, a)||ρ π (s, a)) and optimiizing the policy to minimize it. Choosing DISPLAYFORM0 ). This divergence is commonly referred to as the "reverse" KL divergence. In this setting we have, BID18. Hence, given T π ω, the policy objective in equation 9 takes the form, DISPLAYFORM1 DISPLAYFORM2 On the other hand, plugging the optimal discriminator BID8 into the AIRL BID7 policy objective, we get, DISPLAYFORM3 DISPLAYFORM4 As can be seen, the right hand side of equation 12 matches that of equation 11 up to a constant 3, meaning that AIRL is solving the Max-Ent IRL problem by minimizing the reverse KL divergence, DISPLAYFORM5 As discussed above, present a class of methods for Max-Ent IRL that directly retrieve the expert policy without explicitly finding the reward function of the expert (sec. 2.1).Using an interesting connection between surrogate cost functions for binary classification and fdivergences BID16 ), BID12 derive a special case of their method for minimizing any symmetric 4 f -divergence between ρ exp (s, a) and ρ π (s, a). Choosing the symmetric f -divergence to be the Jensen-Shannon divergence leads to the successful special case, GAIL (sec 2.1).Surprisingly, we now show that f -MAX is a subset of the cost-regularized Max-Ent IRL framework laid out in BID12! Recall the following equations from this framework, DISPLAYFORM6 DISPLAYFORM7 Method Optimized Objective (Minimization) DISPLAYFORM8 is the expert, and ρ DISPLAYFORM9 where ψ(c): C → R was a closed, proper, and convex regularization function on the space of cost function, and ψ * its convex conjugate. For our proof we will operate in the finite state-action space, as in the original work BID12. In this setting, cost functions can be represented as vectors in R S×A, and joint state-action distributions can be represented as vectors in S×A. Let f be the function defining some fdivergence. Given the expert for the task, we can define the following cost function regularizer, DISPLAYFORM10 where f * is the convex conjugate of f. Given this choice, with simple algebraic manipulation done in appendix B we have, DISPLAYFORM11 DISPLAYFORM12 Typically, the causal entropy term is considered a policy regularizer, and is weighted by 0 ≤ λ ≤ 1. 
Therefore, modulo the term H causal (π), our derivations show that f -MAX, and by inheritance AIRL BID7, fall under the cost-regularized Max-Ent IRL framework of BID12. Given the results derived in the prior section, we can now begin to populate table 1, writing various Imitation Learning algorithms in a common form, as the minimization of some statistical divergence between ρ exp (s, a) and ρ π (s, a). In Behaviour Cloning we minimize DISPLAYFORM0 On the other hand, the corollary in section 3.1 demonstrates that AIRL BID7 minimizes KL (ρ π (s, a)||ρ exp (s, a)), while GAIL BID12 minimizes DISPLAYFORM1 Hence, there are two ways in which the direct IRL methods differ from BC. First, in standard BC the policy is optimized to match the conditional distribution ρ exp (a|s), whereas in the other two the policy is explicitly encouraged to match the marginal state distributions as well. Second, in BC we make use of the forward KL divergence, whereas AIRL and GAIL use divergences that exhibit more mode-seeking behaviour. These observations allow us to generate the following two hypotheses about why direct IRL methods outperform BC, particularly in the low-data regime. Hypothesis 1 In common MDPs of interest, the reward function depends more on the state than the action. Hence it is plausible that matching state marginals is more useful than matching action conditional marginals. Hypothesis 2 It is known that optimization using the forward KL divergence results in distributions with a mode-covering behaviour, whereas using the reverse KL results in mode-seeking behaviour BID1. Therefore, since in Reinforcement Learning we care about the "quality of trajectories", being mode-seeking is more beneficial than mode-covering, particularly in the low-data regime. In what follows, we seek to experimentally evaluate our hypotheses. To tease apart the differences between the direct Max-Ent IRL methods and BC, we present an algorithm that optimizes KL (ρ exp (s, a)||ρ π (s, a)). We then compare its performance to Behaviour Cloning and the standard AIRL algorithm using varying amounts of expert demonstrations. While f -MAX is a general algorithm, useful for most choices of f, it unfortunately cannot be used for the special case of forward KL, i.e. KL (ρ exp (s, a)||ρ π (s, a)). In the following sections we identify the problem and present a separate direct Max-Ent IRL method that optimizes this divergence. Let T π ω denote the maximizer of equation 8 for a given policy π. For the case of forward KL, drawing upon equations from BID18 we have, DISPLAYFORM0 Given this, the objective for the policy (equation 9) under the optimal T π ω becomes, DISPLAYFORM1 Hence, there is no signal to train the policy! 6 In this section we derive an algorithm for optimizing KL (ρ exp (s, a)||ρ π (s, a)). Similar to AIRL BID7, let us have a discriminator, D(s, a), whose objective is to discriminate between expert and policy state-action pairs, DISPLAYFORM0 Figure 1: r(s, a) as a function of the logit of the optimal discriminator, ℓ π (s, a) = log (ρ exp (s, a) / ρ π (s, a)). We now define the objective for the policy to be, DISPLAYFORM2 In appendix C we show, DISPLAYFORM3 This is a refreshing result since it demonstrates that we can convert the AIRL algorithm BID7 into its forward KL counterpart by simply modifying the reward function used; in AIRL (reverse KL) the reward is defined as r(s, a):= log D(s, a) − log (1 − D(s, a)), whereas for forward KL it is defined as r(s, a):= exp (ℓ(s, a)) · (−ℓ(s, a)), where ℓ(s, a) = log D(s, a) − log (1 − D(s, a)) denotes the logit of the discriminator. We refer to this forward KL version of AIRL as FAIRL.
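To visualize the difference between the two reward choices, the short numpy sketch below evaluates both as a function of the discriminator logit ℓ(s, a); it reproduces the qualitative shape of Figure 1. The plotting code is ours, and the forward-KL expression simply follows the form stated above.

    import numpy as np
    import matplotlib.pyplot as plt

    ell = np.linspace(-4.0, 4.0, 200)         # discriminator logit l(s, a) = log D - log(1 - D)
    r_airl = ell                              # reverse KL (AIRL): r = l
    r_fairl = np.exp(ell) * (-ell)            # forward KL (FAIRL): r = exp(l) * (-l)

    plt.plot(ell, r_airl, label="AIRL (reverse KL)")
    plt.plot(ell, r_fairl, label="FAIRL (forward KL)")
    plt.xlabel("discriminator logit l(s, a)")
    plt.ylabel("reward r(s, a)")
    plt.legend()
    plt.show()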
DISPLAYFORM4 If we parameterize the discriminator as D(s, a):= σ (ℓ(s, a)), where σ represents the sigmoid activation function, the logit of the discriminator, ℓ(s, a), is equal to log D(s, a) − log (1 − D(s, a)). Hence, for an optimal discriminator, D π, we have ℓ π (s, a) = log (ρ exp (s, a) / ρ π (s, a)). It is instructive to plot the reward functions under the two different settings as a function of ℓ π (s, a); figure 1 presents these plots. As can be seen, in the forward KL version of AIRL, if for a state-action pair the expert puts more probability mass than the policy, the policy is severely punished. However, if for some state-action pairs the policy places a lot more mass than the expert, it almost does not matter. As a result, the policy spreads its mass. On the other hand, in the original AIRL formulation (reverse KL), the policy is always encouraged to put less mass than the expert. These observations are in line with standard intuitions about the mode-covering/mode-seeking behaviours of the two KL divergences BID1. DISPLAYFORM5 In this section we provide empirical comparisons between AIRL, FAIRL, and standard BC in the Ant and Halfcheetah environments found in Open-AI Gym BID3. Figure 2: Average return on 50 evaluation trajectories as a function of the number of expert demonstrations (higher is better). Models evaluated deterministically. As we ran two seeds per experiment, we do not present standard deviations. While FAIRL performs comparably to AIRL, Behaviour Cloning lags behind quite significantly. Considering the form of their objectives (table 1), this demonstrates that the advantage of direct Max-Ent IRL methods over BC is a result of the additional aspect of their objectives explicitly matching marginal state distributions. Expert Policy To simulate access to expert demonstrations we train an expert policy using Soft Actor-Critic (SAC) BID11, a state-of-the-art reinforcement learning algorithm for continuous control. The expert policy consists of a 2-layer MLP with 256-dim layers, ReLU activations, and two output streams for the mean and the diagonal covariance of a Tanh(Normal(µ, σ)) distribution 7. We use the default hyperparameter settings for training the expert. Evaluation Setup Using a trained expert policy, we generated 4 sets of expert demonstrations that contain {4, 8, 16, 32} trajectories. Starting from a random offset, each trajectory is subsampled by a factor of 20. This is the standard protocol employed in prior direct methods for Max-Ent IRL BID12 BID7. Also note that when generating demonstrations we sample from the expert's action distribution rather than taking the mode. This way, since the expert was trained using Soft Actor-Critic, the expert should correspond to the Max-Ent optimal policy for the reward function (1/τ) r g (s, a), where τ is the SAC temperature used and r g (s, a) is the ground-truth reward function. To compare the various learning-from-demonstration algorithms we train each method at each amount of expert demonstrations using 2 random seeds. For each seed, we checkpoint the model at its best validation loss 8 throughout training. At the end of training, the resulting checkpoints are evaluated on 50 test episodes. Details for AIRL & FAIRL For AIRL and FAIRL, the student policy has an identical architecture to that of the expert, and the discriminator is a 2-layer MLP with 256-dim layers and Tanh activations. We normalize the observations from the environment by computing the mean and standard deviation of the expert demonstrations.
The RL algorithm used for the student policies is SAC BID11, and the temperature parameter is tuned separately for AIRL & FAIRL.Details for BC For BC, we use an identical architecture as the expert. The model was fit using Maximum Likelihood Estimation 9. As before, the observations from the environment are normalized using the mean and standard deviation of the expert demonstrations. To match state-action marginals, the optimal student policy must sample actions from the stateconditional distribution, π(a|s). On the other hand, when we deploy a trained policy it is reasonable to instead choose the mode of this distribution, which we call the deterministic setting. Here, we present evaluation under the former setting, and defer the for the deterministic setting to the appendix. DISPLAYFORM0 Figure 3: Validation curves throughout training using stochastic evaluation (refer to appendix D for deterministic evaluation ). Top row Ant, Bottom row Halfcheetah. n represents the number of expert demonstrations provided. Due to its mode-covering behaviour, FAIRL does not perform as well as AIRL when evaluated stochastically. However, with determinisitc evaluation FAIRL outperforms AIRL in the Ant environment. Figure 4 demonstrates that both AIRL and FAIRL outperform BC by a large margin, especially in the low data regime. Specifically, the fact that FAIRL outperforms BC supports hypothesis 1 that the performance gain of Max-Ent IRL is not necessarily due to the direction of KL divergence used, but is the of explicitly encouraging the policy to match the marginal state distribution of the expert in addition to the matching of conditional action distribution. To compare AIRL and FAIRL, in figure 3 we plot the validation curves throughout training using stochastic evaluation. Across the two tasks and various number of expert demonstrations, AIRL consistently outperforms FAIRL. When using deterministic evaluation (figure 5), FAIRL achieves a significant performance gain to the point that it outperforms AIRL on the Ant environment across all demonstrations set sizes. Such observations provide initial positive support for hypothesis 2; as more expert demonstrations are provided, the policy trained with FAIRL broadens its distribution to cover the data-distribution, ing in trajectories accumulating less reward in expectation. We note however that more detailed experiments are necessary for adequately comparing the two methods. The motivation for this work stemmed from the superior performance of recent direct Max-Ent IRL methods BID12 BID7 compared to BC in the low-data regime, and the desire to understand the relation between various approaches for Learning from Demonstrations. We first presented f -MAX, a generalization of AIRL BID7, which allowed us to interpret AIRL as optimizing for KL (ρ π (s, a)||ρ exp (s, a)). We demonstrated that f -MAX, and by inhertance AIRL, is a subset of the cost-regularized IRL framework laid out by BID12. Comparing to the standard BC objective, E ρ exp (s) [KL (ρ exp (a|s)||ρ π (a|s))], we hypothesized two reasons for the superior performance of AIRL: 1) the additional terms in the objective encouraging the matching of marginal state distributions, and 2) the direction of the KL divergence being optimized. Setting out to empirically evaluate these claims we presented FAIRL, a one-line modification of the AIRL algorithm that optimizes KL (ρ exp (s, a)||ρ π (s, a)). 
FAIRL outperformed BC in a similar fashion to AIRL, which allowed us to conclude the key factor being the matching of state marginals. Additional comparisons between FAIRL and AIRL provided initial understanding about the role of the direction of the KL being optimized. In future work we aim to produce on a more diverse set of more challenging environments. Additionally, evaluating other choices of f -divergence beyond forward and reverse KL may present interesting avenues for improvement BID26. Lastly, but importantly, we would like to understand whether the mode-covering behaviour of FAIRL could in more robust policies BID19.A SOME USEFUL IDENTITIES Let h: S × A → R be an arbitrary function. If all episodes have the same length T, we have, DISPLAYFORM0 DISPLAYFORM1 In a somewhat similar fashion, in the infinite horizon case with fixed probability γ ∈ of transitioning to a terminal state, for the discounted sum below we have, DISPLAYFORM2 DISPLAYFORM3 where Γ:= 1 1−γ is the normalizer of the sum t γ t. Since the integral of an infinite series is not always equal to the infinite series of integrals, some analytic considerations must be made to go from equation 34 to 35. But, one simple case in which it holds is when the ranges of h and all ρ π (s t, a t) are bounded. and assuming the discriminator is optimal 10, we have, Figure 4: Average return on 50 evaluation trajectories as a function of number of expert demonstrations (higher is better). Models evaluated stochastically. As we ran two seeds per experiment, we do not present standard deviations. While FAIRL performs comparably to AIRL, Behaviour cloning lags behind quite significantly. Considering the form of their objectives (table 1), this demonstrates that the advantage of direct Max-Ent IRL methods over BC is a of the additional aspect of their objectives explicitly matching marginal state distributions.10 As a reminder, the optimal discriminator has the form, D(s, a) = ρ exp (s,a) ρ exp (s,a)+ρ π (s,a). A simple proof of which can be found in BID8. | Distribution matching through divergence minimization provides a common ground for comparing adversarial Maximum-Entropy Inverse Reinforcement Learning methods to Behaviour Cloning. | 629 | scitldr |
Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some global structures of the Q function, one should be capable of inferring the function better by leveraging such structures. Specifically, we investigate the low-rank structure, which widely exists for big data matrices. We verify empirically the existence of low-rank Q functions in the context of control and deep RL tasks (Atari games). As our key contribution, by leveraging Matrix Estimation (ME) techniques, we propose a general framework to exploit the underlying low-rank structure in Q functions, leading to a more efficient planning procedure for classical control, and additionally, a simple scheme that can be applied to any value-based RL techniques to consistently achieve better performance on''low-rank'' tasks. Extensive experiments on control tasks and Atari games confirm the efficacy of our approach. Value-based methods are widely used in control, planning and reinforcement learning (; ;). To solve a Markov Decision Process (MDP), one common method is value iteration, which finds the optimal value function. This process can be done by iteratively computing and updating the state-action value function, represented by Q(s, a) (i.e., the Q-value function). In simple cases with small state and action spaces, value iteration can be ideal for efficient and accurate planning. However, for modern MDPs, the data that encodes the value function usually lies in thousands or millions of dimensions (; 2019), including images in deep reinforcement learning . These practical constraints significantly hamper the efficiency and applicability of the vanilla value iteration. Yet, the Q-value function is intrinsically induced by the underlying system dynamics. These dynamics are likely to possess some structured forms in various settings, such as being governed by partial differential equations. In addition, states and actions may also contain latent features (e.g., similar states could have similar optimal actions). Thus, it is reasonable to expect the structured dynamic to impose a structure on the Q-value. Since the Q function can be treated as a giant matrix, with rows as states and columns as actions, a structured Q function naturally translates to a structured Q matrix. In this work, we explore the low-rank structures. To check whether low-rank Q matrices are common, we examine the benchmark Atari games, as well as 4 classical stochastic control tasks. As we demonstrate in Sections 3 and 4, more than 40 out of 57 Atari games and all 4 control tasks exhibit low-rank Q matrices. This leads us to a natural question: How do we leverage the low-rank structure in Q matrices to allow value-based techniques to achieve better performance on "low-rank" tasks? We propose a generic framework that allows for exploiting the low-rank structure in both classical planning and modern deep RL. Our scheme leverages Matrix Estimation (ME), a theoretically guaranteed framework for recovering low-rank matrices from noisy or incomplete measurements . In particular, for classical control tasks, we propose Structured Value-based Planning (SVP). 
For the Q matrix of dimension |S| × |A|, at each value iteration, SVP randomly updates a small portion of the Q(s, a) and employs ME to reconstruct the remaining elements. We show that planning problems can greatly benefit from such a scheme, where much fewer samples (only sample around 20% of (s, a) pairs at each iteration) can achieve almost the same policy as the optimal one. For more advanced deep RL tasks, we extend our intuition and propose Structured Value-based Deep RL (SV-RL), applicable for any value-based methods such as DQN . Here, instead of the full Q matrix, SV-RL naturally focuses on the "sub-matrix", corresponding to the sampled batch of states at the current iteration. For each sampled Q matrix, we again apply ME to represent the deep Q learning target in a structured way, which poses a low rank regularization on this "sub-matrix" throughout the training process, and hence eventually the Q-network's predictions. Intuitively, as learning a deep RL policy is often noisy with high variance, if the task possesses a low-rank property, this scheme will give a clear guidance on the learning space during training, after which a better policy can be anticipated. We confirm that SV-RL indeed can improve the performance of various value-based methods on "low-rank" Atari games: SV-RL consistently achieves higher scores on those games. Interestingly, for complex, "high-rank" games, SV-RL performs comparably. ME naturally seeks solutions that balance low rank and a small reconstruction error (cf. Section 3.1). Such a balance on reconstruction error helps to maintain or only slightly degrade the performance for "high-rank" situation. We summarize our contributions as follows: • We are the first to propose a framework that leverages matrix estimation as a general scheme to exploit the low-rank structures, from planning to deep reinforcement learning. • We demonstrate the effectiveness of our approach on classical stochastic control tasks, where the low-rank structure allows for efficient planning with less computation. • We extend our scheme to deep RL, which is naturally applicable for any value-based techniques. Across a variety of methods, such as DQN, double DQN, and dueling DQN, experimental on all Atari games show that SV-RL can consistently improve the performance of value-based methods, achieving higher scores for tasks when low-rank structures are confirmed to exist. To motivate our method, let us first investigate a toy example which helps to understand the structures within the Q-value function. We consider a simple deterministic MDP, with 1000 states, 100 actions and a deterministic state transition for each action. The reward r(s, a) is randomly generated first for each (s, a) pair, and then fixed throughout. A discount factor γ = 0.95 is used. The deterministic nature imposes a strong relationship among connected states. In this case, our goal is to explore: what kind of structures the Q function may contain; and how to effectively exploit such structures. The Low-rank Structure Under this setting, Q-value could be viewed as a 1000 × 100 matrix. To probe the structure of the Q-value function, we perform the standard Q-value iteration as follows: where s denotes the next state after taking action a at state s. We randomly initialize Q. In Fig. 1, we show the approximate rank of Q (t) and the mean-square error (MSE) between Q (t) and the optimal Q *, during each value iteration. 
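A compact numpy sketch of this probe is given below (our own illustrative code). It runs the Q-value iteration on a random deterministic MDP of the size described above and periodically reports the approximate rank of Q (t); the precise definition of the approximate rank it computes is given in the next paragraph.

    import numpy as np

    n_states, n_actions, gamma = 1000, 100, 0.95
    rng = np.random.default_rng(0)
    R = rng.random((n_states, n_actions))                    # fixed random rewards r(s, a)
    P = rng.integers(n_states, size=(n_states, n_actions))   # deterministic next state s'(s, a)

    def approx_rank(M, thresh=0.99):
        """Number of leading singular values capturing `thresh` of the squared spectrum."""
        s = np.linalg.svd(M, compute_uv=False)
        frac = np.cumsum(s ** 2) / np.sum(s ** 2)
        return int(np.searchsorted(frac, thresh) + 1)

    Q = rng.random((n_states, n_actions))
    for t in range(200):
        Q = R + gamma * Q[P].max(axis=-1)                    # Q(s,a) <- r(s,a) + gamma * max_a' Q(s',a')
        if t % 20 == 0:
            print(t, approx_rank(Q))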
Here, the approximate rank is defined as the minimum number k of leading singular values (denoted by σ) that capture more than 99% of the variance of all singular values, i.e., the smallest k such that Σ_{i=1}^{k} σ_i^2 / Σ_j σ_j^2 ≥ 0.99. As illustrated in Fig. 1 (a) and 1(b), the standard theory guarantees the convergence to Q *; more interestingly, the converged Q * is of low rank, and the approximate rank of Q (t) drops fast. These observations give strong evidence for the intrinsic low dimensionality of this toy MDP. Naturally, an algorithm that leverages such structures would be much desired. The previous results motivate us to exploit the structures for efficient planning. The idea is simple: If the eventual matrix is low-rank, why not enforce such a structure throughout the iterations? In other words, with the existence of a global structure, we should be capable of exploiting it during intermediate updates and possibly also regularizing the intermediate results to be in the same low-rank space. In particular, at each iteration, instead of updating every (s, a) pair as in the standard Q-value iteration, we would like to only calculate Q (t+1) for some (s, a) pairs and then exploit the low-rank structure to recover the whole Q (t+1) matrix. We choose matrix estimation (ME) as our reconstruction oracle. The reconstructed matrix is often low-rank, and hence regularizes the Q matrix to be low-rank as well. We validate this framework in Fig. 1 (c) and 1(d), where for each iteration, we only randomly sample 50% of the (s, a) pairs, calculate their corresponding Q (t+1) and reconstruct the whole Q (t+1) matrix with ME. Clearly, after around 40 iterations, we obtain results comparable to the vanilla value iteration. Importantly, this comparable performance only requires directly computing 50% of the whole Q matrix at each iteration. It is not hard to see that in general, each vanilla value iteration incurs a computation cost of O(|S|^2 |A|^2). The complexity of our method, however, only scales as O(p |S|^2 |A|^2) + O_ME, where p is the fraction of pairs we sample and O_ME is the complexity of ME. In general, many ME methods employ SVD as a subroutine, whose complexity is bounded by O(min{|S|^2 |A|, |S| |A|^2}). For low-rank matrices, faster methods can have a complexity of order linear in the dimensions. In other words, our approach improves computational efficiency, especially for modern high-dimensional applications. This overall framework thus appears to be a successful technique: it exploits the low-rank behavior effectively and efficiently when the underlying task indeed possesses such a structure. Having developed the intuition underlying our methodology, we next provide a formal description in Sections 3.1 and 3.2. One natural question is whether such structures and our method are general in more realistic control tasks. Towards this end, we provide further empirical support in Section 3.3. ME concerns recovering a full data matrix from potentially incomplete and noisy observations of its elements. Formally, consider an unknown data matrix X ∈ R^{n×m} and a set of observed entries Ω. If the observations are incomplete, it is often assumed that each entry of X is observed independently with probability p ∈ (0, 1]. In addition, the observation could be noisy, where the noise is assumed to be mean zero. Given such an observed set Ω, the goal of ME is to produce an estimator M̂ so that ||M̂ − X|| ≈ 0, under an appropriate matrix norm such as the Frobenius norm. The algorithms in this field are rich.
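As one concrete example of such an algorithm, the following is a bare-bones sketch of an iterative SVD soft-thresholding scheme in the spirit of the Soft-Impute algorithm discussed next. It is our own simplified code, with a single fixed shrinkage level and iteration budget, whereas practical implementations tune both.

    import numpy as np

    def soft_impute(observed, mask, shrink=1.0, n_iters=100):
        """Reconstruct a matrix from observed entries via iterative SVD soft-thresholding.

        observed : matrix holding the observed values (entries outside `mask` are ignored)
        mask     : boolean matrix, True where an entry was observed
        """
        M = np.where(mask, observed, 0.0)                    # initialize missing entries with zeros
        low_rank = M
        for _ in range(n_iters):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            s = np.maximum(s - shrink, 0.0)                  # soft-threshold the singular values
            low_rank = (U * s) @ Vt                          # current low-rank estimate
            M = np.where(mask, observed, low_rank)           # keep observed entries, fill the rest
        return low_rank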
Theoretically, the essential message is: exact or approximate recovery of the data matrix X is guaranteed if X contains some global structure (Candès & ; ;). In the literature, most attention has been focusing on the low-rank structure of a matrix. Correspondingly, there are many provable, practical algorithms to achieve the desired recovery. Early convex optimization methods (Candès &) seek to minimize the nuclear norm, ||M || *, of the estimator. For example, fast algorithms, such as the Soft-Impute algorithm solves the following minimization problem: Since the nuclear norm || · || * is a convex relaxation of the rank, the convex optimization approaches favor solutions that are with small reconstruction errors and in the meantime being relatively low-rank, which are desirable for our applications. Apart from convex optimization, there are also spectral methods and even non-convex optimization approaches (; ;). In this paper, we view ME as a principled reconstruction oracle to effectively exploit the low-rank structure. For faster computation, we mainly employ the Soft-Impute algorithm. We now formally describe our approach, which we refer as structured value-based planning (SVP). Fig. 2 illustrates our overall approach for solving MDP with a known model. The approach is based on the Q-value iteration. At the t-th iteration, instead of a full pass over all state-action pairs: State-action value function Incomplete observation Reconstructed value function Figure 2: An illustration of the proposed SVP algorithm for leveraging low-rank structures. 1. SVP first randomly selects a subset Ω of the state-action pairs. In particular, each state-action pair in S × A is observed (i.e., included in Ω) independently with probability p. 2. For each selected (s, a), the intermediateQ(s, a) is computed based on the Q-value iteration: 3. The current iteration then ends by reconstructing the full Q matrix with matrix estimation, from the set of observations in Ω. That is, Q (t+1) = ME {Q(s, a)} (s,a)∈Ω. Overall, each iteration reduces the computation cost by roughly 1 − p (cf. discussions in Section 2). In Appendix A, we provide the pseudo-code and additionally, a short discussion on the technical difficulty for theoretical analysis. Nevertheless, we believe that the consistent empirical benefits, as will be demonstrated, offer a sounding foundation for future analysis. We empirically evaluate our approach on several classical stochastic control tasks, including the Inverted Pendulum, the Mountain Car, the Double Integrator, and the Cart-Pole. Our objective is to demonstrate, as in the toy example, that if the optimal Q * has a low-rank structure, then the proposed SVP algorithm should be able to exploit the structure for efficient planning. We present the evaluation on Inverted Pendulum, and leave additional on other planning tasks in Appendix B and C. Inverted Pendulum In this classical continuous task, our goal is to balance an inverted pendulum to its upright position, and the performance metric is the average angular deviation. The dynamics is described by the angle and the angular speed, i.e., s = (θ,θ), and the action a is the torque applied. We discretize the task to have 2500 states and 1000 actions, leading to a 2500 × 1000 Q-value matrix. The Low-rank Structure We first verify that the optimal Q * indeed contains the desired low-rank structure. We run the vanilla value iteration until it converges. The converged Q matrix is found to have an approximate rank of 7. 
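Before turning to the results, we give a sketch of one SVP iteration (steps 1-3 above), written against the soft_impute routine from the previous sketch. The code is ours; for brevity it assumes the deterministic transitions of the toy example (a general known model would take an expectation over next states), and it forms the full Bellman backup before masking, whereas an efficient implementation evaluates only the sampled pairs.

    import numpy as np

    def svp_iteration(Q, R, P, gamma, p=0.2, rng=np.random.default_rng()):
        """One SVP step: update a random p-fraction of (s, a) pairs, then reconstruct with ME."""
        n_states, n_actions = Q.shape
        mask = rng.random((n_states, n_actions)) < p          # each pair included in Omega w.p. p
        backup = R + gamma * Q[P].max(axis=-1)                # intermediate Bellman targets
        Q_partial = np.where(mask, backup, 0.0)               # only the sampled entries are "observed"
        return soft_impute(Q_partial, mask)                   # ME fills in the remaining entries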
For further evidence, in Appendix C, we construct "low-rank" policies directly from the converged Q matrix, and show that the policies maintain the desired performance. The SVP Policy Having verified the structure, we would expect our approach to be effective. To this end, we apply SVP with different observation probability p and fix the overall number of iterations to be the same as the vanilla Q-value iteration. Fig. 3 confirms the success of our approach. Fig. 3(a), 3(b) and 3(c) show the comparison between optimal policy and the final policy based on SVP. We further illustrate the performance metric, the average angular deviation, in Fig. 3(d). Overall, much fewer samples are needed for SVP to achieve a comparable performance to the optimal one. So far, our focus has been on tabular MDPs where value iteration can be applied straightforwardly. However, the idea of exploiting structure is much more powerful: we propose a natural extension of our approach to deep RL. Our scheme again intends to exploit and regularize structures in the Q-value function with ME. As such, it can be seamlessly incorporated into value-based RL techniques that include a Q-network. We demonstrate this on Atari games, across various value-based RL techniques. Before diving into deep RL, let us step back and review the process we took to develop our intuition. Previously, we start by treating the Q-value as a matrix. To exploit the structure, we first verify that certain MDPs have essentially a low-rank Q *. We argue that if this is indeed the case, then enforcing the low-rank structures throughout the iterations, by leveraging ME, should lead to better algorithms. A naive extension of the above reasoning to deep RL immediately fails. In particular, with images as states, the state space is effectively infinitely large, leading to a tall Q matrix with numerous number of rows (states). Verifying the low-rank structure for deep RL hence seems intractable. However, by definition, if a large matrix is low-rank, then almost any row is a linear combination of some other rows. That is, if we sample a small batch of the rows, the ing matrix is most likely low-rank as well. To probe the structure of the deep Q function, it is, therefore, natural to understand the rank of a randomly sampled batch of states. In deep RL, our target for exploring structures is no longer the optimal Q *, which is never available. In fact, like SVP, the natural objective should be the converged values of the underlying algorithm, which in deep scenarios, are the eventually learned Q function. With the above discussions, we now provide evidence for the low-rank structure of learned Q function on some Atari games. We train standard DQN on 4 games, with a batch size of 32. To be consistent, the 4 games all have 18 actions. After the training process, we randomly sample a batch of 32 states, evaluate with the learned Q network and finally synthesize them into a matrix. That is, a 32 × 18 data matrix with rows the batch of states, the columns the actions, and the entries the values from the learned Q network. Note that the rank of such a matrix is at most 18. The above process is repeated for 10,000 times, and the histogram and empirical CDF of the approximate rank is plotted in Fig. 4. Apparently, there is a strong evidence supporting a highly structured low-rank Q function for those games -the approximate ranks are uniformly small; in most cases, they are around or smaller than 3. 
Having demonstrated the low-rank structure within some deep RL tasks, we naturally seek approaches that exploit the structure during the training process. We extend the same intuition here: if, eventually, the learned Q function is of low rank, then enforcing/regularizing the low-rank structure at each iteration of the learning process should similarly lead to efficient learning and potentially better performance. In deep RL, each iteration of the learning process is naturally the SGD step where one would update the Q network. Correspondingly, this suggests harnessing the structure within the batch of states. Following our previous success, we leverage ME to achieve this task. We now formally describe our approach, referred to as structured value-based RL (SV-RL). It exploits the structure of the sampled batch at each SGD step, and can be easily incorporated into any Q-value based RL method that updates the Q network via a similar step as in Q-learning. In particular, Q-value based methods have a common model update step via SGD, and we only exploit the structure of the sampled batch at this step; the other details pertaining to each specific method are left intact. Precisely, when updating the model via SGD, Q-value based methods first sample a batch of B transitions {(s_i, a_i, r_i, s'_i)}_{i=1}^B and form the following updating targets: y_i = r_i + γ max_{a∈A} Q̂(s'_i, a). For example, in DQN, Q̂ is the target network. The Q network is then updated by taking a gradient step on the loss Σ_i (y_i − Q(s_i, a_i; θ))² with respect to the parameter θ. To exploit the structure, we then consider reconstructing a matrix Q† from Q̂, via ME. The reconstructed Q† will replace the role of Q̂ in the target equation above to form the targets y_i for the gradient step. In particular, the matrix Q† has a dimension of B × |A|, where the rows represent the "next states" in the batch, the columns represent actions, and the entries are reconstructed values. Let S_B = {s'_i}_{i=1}^B denote the set of next states in the batch. SV-RL alters the SGD update step as illustrated in Algorithm 1: 1. sample a set Ω of state-action pairs from S_B × A, where each state-action pair in S_B × A is observed (i.e., included in Ω) with probability p, independently; 2. evaluate every state-action pair in Ω via Q̂, where Q̂ is the network that would be used to form the targets {y_i} in the original value-based method; 3. based on the evaluated values, reconstruct a matrix Q† with ME, i.e., Q† = ME({Q̂(s', a)}_{(s',a)∈Ω}); 4. replace Q̂ with the reconstructed Q† to evaluate the SV-RL targets {y_i}, and update the Q network with the original targets replaced by the SV-RL targets. Note the resemblance of the above procedure to that of SVP in Section 3.2. When the full Q matrix is available, as in Section 3.2, we sub-sample the Q matrix and then reconstruct the entire matrix. When only a subset of the states (i.e., the batch) is available, we naturally look at the corresponding sub-matrix of the entire Q matrix and seek to exploit its structure. We conduct extensive experiments on Atari 2600. We apply SV-RL to three representative value-based RL techniques, i.e., DQN, double DQN and dueling DQN. We fix the total number of training iterations and set all the hyper-parameters to be the same. For each experiment, averaged results across multiple runs are reported. Additional details are provided in Appendix D. We present representative results of SV-RL applied to the three value-based deep RL techniques in Fig. 6. These games are verified, by the method mentioned in Section 4.1, to be low-rank. Additional results on all Atari games are provided in Appendix E.
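A hedged sketch of the SV-RL target computation is given below; target_net plays the role of Q̂, reconstruct is any pluggable matrix-estimation routine (for instance, the soft_impute sketch from the planning section), and the handling of terminal transitions via dones follows the usual DQN convention rather than anything stated explicitly above.

    import torch

    @torch.no_grad()
    def sv_rl_targets(target_net, rewards, next_states, dones,
                      gamma=0.99, p=0.9, reconstruct=None):
        # Evaluate the target network only on a random subset Omega of
        # (next state, action) pairs, reconstruct the B x |A| matrix Q-dagger via
        # matrix estimation, then form the usual max-over-actions targets.
        q_next = target_net(next_states)                       # (B, |A|)
        if reconstruct is not None:
            mask = torch.rand_like(q_next) < p                 # Omega
            q_obs = torch.where(mask, q_next, torch.zeros_like(q_next))
            q_rec = reconstruct(q_obs.cpu().numpy(), mask.cpu().numpy())
            q_next = torch.as_tensor(q_rec, dtype=q_next.dtype, device=q_next.device)
        return rewards + gamma * (1.0 - dones) * q_next.max(dim=1).values

The rest of the training loop (the loss on Q(s_i, a_i; θ), the optimizer step, the target-network sync) is unchanged, which is what makes the scheme easy to drop into DQN variants.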
The figure reveals the following . First, games that possess structures indeed benefit from our approach, earning mean rewards that are strictly higher than the vanilla algorithms across time. More importantly, we observe consistent improvements across different value-based RL techniques. This highlights the important role of the intrinsic structures, which are independent of the specific techniques, and justifies the effectiveness of our approach in consistently exploiting such structures. Further Observations Interestingly however, the performance gains vary from games to games. Specifically, the majority of the games can have benefits, with few games performing similarly or slightly worse. Such observation motivates us to further diagnose SV-RL in the next section. So far, we have demonstrated that games which possess structured Q-value functions can consistently benefit from SV-RL. Obviously however, not all tasks in deep RL would possess such structures. As such, we seek to diagnose and further interpret our approach at scale. Diagnosis We select 4 representative examples (with 18 actions) from all tested games, in which SV-RL performs better on two tasks (i.e., FROSTBITE and KRULL), slightly better on one task (i.e., ALIEN), and slightly worse on the other (i.e., SEAQUEST). The intuitions we developed in Section 4 incentivize us to further check the approximate rank of each game. As shown in Fig. 7, in the two better cases, both games are verified to be approximately low-rank (∼ 2), while the approximate rank in ALIEN is moderately high (∼ 5), and even higher in SEAQUEST (∼ 10). Consistent Interpretations As our approach is designed to exploit structures, we would expect to attribute the differences in performance across games to the "strength" of their structured properties. Games with strong low-rank structures tend to have larger improvements with SV-RL (Fig. 7 (a) and 7(b)), while moderate approximate rank tends to induce small improvements (Fig. 7(c) ), and high approximate rank may induce similar or slightly worse performances (Fig. 7(d) ). The empirical are well aligned with our arguments: if the Q-function for the task contains low-rank structure, SV-RL can exploit such structure for better efficiency and performance; if not, SV-RL may introduce slight or no improvements over the vanilla algorithms. As mentioned, the ME solutions balance being low rank and having small reconstruction error, which helps to ensure a reasonable or only slightly degraded performance, even for "high rank" games. We further observe consistent on ranks vs. improvement across different games and RL techniques in Appendix E, verifying our arguments. We plot games where the SV-based method performs differently. More structured games (with lower rank) can achieve better performance with SV-RL. Structures in Value Function Recent work in the control community starts to explore the structure of value function in control/planning tasks (; ; ; 2018). These work focuses on decomposing the value function and subsequently operating on the reduced-order space. In spirit, we also explore the low-rank structure in value function. The central difference is that instead of decomposition, we focus on "completion". We seek to efficiently operate on the original space by looking at few elements and then leveraging the structure to infer the rest, which allows us to extend our approach to modern deep RL. 
In addition, while there are few attempts for basis function learning in high dimensional RL , functions are hard to generate in many cases and approaches based on basis functions typically do not get the same performance as DQN, and do not generalize well. In contrast, we provide a principled and systematic method, which can be applied to any framework that employs value-based methods or sub-modules. Value-based Deep RL Value-based methods are fundamental in deep RL, exemplified by DQN (; 2015). There has been a large body of literature on its variants, such as double DQN , dueling DQN , IQN and other techniques that improve exploration . Our approach focuses on general value-based RL methods. As long as the method has a similar model update step as in Q-learning, our approach can leverage the structure to help with the task. We empirically show that deep RL tasks that have structured value functions indeed benefit from our scheme. Matrix Estimation ME is the primary tool we leverage to exploit the low-rank structure in value functions. The techniques have been widely studied and applied to different domains (; ;), and recently even in robust deep learning . The field is relatively mature, with extensive algorithms and provable recovery guarantees for structured matrix . Because of the strong promise, we view ME as a principled reconstruction oracle to exploit the low-rank structures within matrices. We investigated the structures in value function, and proposed a complete framework to understand, validate, and leverage such structures in various tasks, from planning to deep reinforcement learning. The proposed SVP and SV-RL algorithms harness the strong low-rank structures in the Q function, showing consistent benefits for both planning tasks and value-based deep reinforcement learning techniques. Extensive experiments validated the significance of the proposed schemes, which can be easily embedded into existing planning and RL frameworks for further improvements. randomly sample a set Ω of observed entries from S × A, each with probability p 4: / * update the randomly selected state-action pairs * / 5: end for 8: / * reconstruct the Q matrix via matrix estimation * / 9: apply matrix completion to the observed values {Q(s, a)} (s,a)∈Ω to reconstruct Q (t+1): While based on classical value iteration, we remark that a theoretical analysis, even in the tabular case, is quite complex. Although the field of ME is somewhat mature, the analysis has been largely focused on the "one-shot" problem: recover a static data matrix given one incomplete observation. Under the iterative scenario considered here, standard assumptions are easily broken and the analysis warrants potentially new machinery. Furthermore, much of the effort in the ME community has been devoted to the Frobenius norm guarantees rather than the infinite norm as in value iteration. Non-trivial infinite norm bound has received less attention and often requires special techniques . Resolving the above burdens would be important future avenues in its own right for the ME community. Henceforth, this paper focuses on empirical analysis and more importantly, generalizing the framework successfully to modern deep RL contexts. As we will demonstrate, the consistent empirical benefits offer a sounding foundation for future analysis. Inverted Pendulum As stated earlier in Sec. 3.3, the goal is to balance the inverted pendulum on the upright equilibrium position. 
The physical dynamics of the system is described by the angle and the angular speed, i.e., (θ,θ). Denote τ as the time interval between decisions, u as the torque input on the pendulum, the dynamics can be written as : A reward function that penalizes control effort while favoring an upright pendulum is used: In the simulation, the state space is (−π, π] for θ and [−10, 10] forθ. We limit the input torque in [−1, 1] and set τ = 0.3. We discretize each dimension of the state space into 50 values, and action space into 1000 values, which forms an Q-value function a matrix of dimension 2500 × 1000. We follow to handle the policy of continuous states by modelling their transitions using multi-linear interpolation. Mountain Car We select another classical control problem, i.e., the Mountain Car , for further evaluations. In this problem, an under-powered car aims to drive up a steep hill . The physical dynamics of the system is described by the position and the velocity, i.e., (x,ẋ). Denote u as the acceleration input on the car, the dynamics can be written as The reward function is defined to encourage the car to get onto the top of the mountain at x 0 = 0.5: We follow standard settings to restrict the state space as [−0.07, 0.07] for x and [−1.2, 0.6] forẋ, and limit the input u ∈ [−1, 1]. Similarly, the whole state space is discretized into 2500 values, and the action space is discretized into 1000 values. The evaluation metric we are concerned about is the total time it takes to reach the top of the mountain, given a randomly and uniformly generated initial state. We consider the Double Integrator system , as another classical control problem for evaluation. In this problem, a unit mass brick moving along the x-axis on a frictionless surface, with a control input which provides a horizontal force, u . The task is to design a control system to regulate this brick to x = T. The physical dynamics of the system is described by the position and the velocity (i.e., (x,ẋ)), and can be derived as , we use the quadratic cost formulation to define the reward function, which regulates the brick to We follow standard settings to restrict the state space as [−3, 3] for both x andẋ, limit the input u ∈ [−1, 1] and set τ = 0.1. The whole state space is discretized into 2500 values, and the action space is discretized into 1000 values. Similarly, we define the evaluation metric as the total time it takes to reach to x = T, given a randomly and uniformly generated initial state. Cart-Pole Finally, we choose the Cart-Pole problem , a harder control problem with 4-dimensional state space. The problem consists a pole attached to a cart moving on a frictionless track. The cart can be controlled by means of a limited force within 10N that is possible to apply both to the left or to the right of the cart. The goal is to keep the pole on the upright equilibrium position. The physical dynamics of the system is described by the angle and the angular speed of the pole, and the position and the speed of the cart, i.e., (θ,θ, x,ẋ). Denote τ as the time interval between decisions, u as the force input on the cart, the dynamics can be written as x:= u + ml θ2 sin θ −θ cos θ where g = 9.8m/s 2 corresponds to the gravity acceleration, m c = 1kg denotes the mass of the cart, m = 0.1kg denotes the mass of the pole, l = 0.5m is half of the pole length, and u corresponds to the force applied to the cart, which is limited by u ∈ [−10, 10]. 
A reward function that favors the pole in an upright position, i.e., characterized by keeping the pole in vertical position between |θ| ≤ 12π 180, is expressed as r(θ) = cos 4 (15θ). In the simulation, the state space is [− .5] foṙ x. We limit the input force in [−10, 10] and set τ = 0.1. We discretize each dimension of the state space into 10 values, and action space into 1000 values, which forms an Q-value function a matrix of dimension 10000 × 1000. We further verify that the optimal Q * indeed contains the desired low-rank structure. To this end, we construct "low-rank" policies directly from the converged Q matrix. In particular, for the converged Q matrix, we sub-sample a certain percentage of its entries, reconstruct the whole matrix via ME, and finally construct a corresponding policy. Fig. 8 illustrates the , where the policy heatmap as well as the performance (i.e., the angular error) of the "low-rank" policy is essentially identical to the optimal one. The reveal the intrinsic strong low-rank structures lie in the Q-value function. We provide additional for the inverted pendulum problem. We show the policy trajectory (i.e., how the angle of the pendulum changes with time) and the input changes (i.e., how the input torque changes with time), for each policy. In Fig. 9, we first show the comparison between the optimal policy and a "low-rank" policy. Recall that the low-rank policies are directly reconstructed from the converged Q matrix, with limited observation of a certain percentage of the entries in the converged Q matrix. As shown, the "low-rank" policy performs nearly identical to the optimal one, in terms of both the policy trajectory and the input torque changes. This again verifies the strong low-rank structure lies in the Q function. Further, we show the policy trajectory and the input torque changes of the SVP policy. We vary the percentage of observed data for SVP, and present the policies with 20% and 60% for demonstration. As reported in Fig. 10, the SVP policies are essentially identical to the optimal one. Interestingly, when we further decrease the observing percentage to 20%, the policy trajectory vibrates a little bit, but can still stabilize in the upright position with a small average angular deviation ≤ 5 •. Similarly, we first verify the optimal Q * contains the desired low-rank structure. We use the same approach to generate a "low-rank" policy based on the converged optimal value function. Fig. 11 (a) and 11(b) show the policy heatmaps, where the reconstructed "low-rank" policy maintains visually identical to the optimal one. In Fig. 11(c) and 12, we quantitatively show the average time-to-goal, the policy trajectory and the input changes between the two schemes. Compared to the optimal one, even with limited sampled data, the reconstructed policy can maintain almost identical performance. We further show the of the SVP policy with different amount of observed data (i.e., 20% and 60%) in Fig. 13 and 14. Again, the SVP policies show consistently comparable to the optimal policy, over various evaluation metrics. Interestingly, the converged Q matrix of vanilla value iteration is found to have an approximate rank of 4 (the whole matrix is 2500 × 1000), thus the SVP can harness such strong low-rank structure for perfect recovery even with only 20% observed data. For the Double Integrator, We first use the same approach to generate a "low-rank" policy. Fig. 15(a) and 15(b) show that the reconstructed "low-rank" policy is visually identical to the optimal one. 
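The "low-rank policy" construction used in these appendix experiments amounts to the following sketch. The reconstruct argument is any matrix-estimation routine (e.g., the soft_impute sketch given earlier); the function name and the 20% default are illustrative.

    import numpy as np

    def low_rank_policy(Q_converged, reconstruct, observe_frac=0.2, rng=None):
        # Sub-sample a fraction of the converged Q matrix, reconstruct the whole
        # matrix via matrix estimation, and read off the greedy policy.
        rng = rng or np.random.default_rng(0)
        mask = rng.random(Q_converged.shape) < observe_frac
        Q_rec = reconstruct(np.where(mask, Q_converged, 0.0), mask)
        return Q_rec.argmax(axis=1)          # greedy action per discretized state

    # Comparing low_rank_policy(Q_star, soft_impute, 0.2) against
    # Q_star.argmax(axis=1), e.g. as policy heatmaps, is the comparison behind
    # Figs. 8, 11, 15 and 19.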
In Further, we show the of the SVP policy with different amount of observed data (i.e., 20% and 60%) in Fig. 17 and 18. As shown, the SVP policies show consistently decent , which demonstrates that SVP can harness such strong low-rank structure even with only 20% observed data. Finally, we evaluate SVP on the Cart-Pole system. Note that since the state space has a dimension of 4, the policy heatmap should contain 4 dims, but is hard to visualize. Since the metric we care is the angle deviation, we here only plot the first two dims (i.e., the (θ,θ) tuple) with fixed x andẋ, to visualize the policy heatmaps. We first use the same approach to generate a "low-rank" policy. Fig. 19(a) and 19(b) show the policy heatmaps, where the reconstructed "low-rank" policy is visually identical to the optimal one. In Fig. 19(c) and 20, we quantitatively show the average time-to-goal, the policy trajectory and the input changes between the two schemes. As demonstrated, the reconstructed policy can maintain almost identical performance with only small amount of sampled data. We finally show the of the SVP policy with different amount of observed data (i.e., 20% and 60%) in Fig. 21 and 22. Even for harder control tasks with higher dimensional state space, the SVP policies are still essentially identical to the optimal one. Across various stochastic control tasks, we demonstrate that SVP can consistently leverage strong low-rank structures for efficient planning. Training Details and Hyper-parameters The network architectures of DQN and dueling DQN used in our experiment are exactly the same as in the original papers (; ;). We train the network using the Adam optimizer . In all experiments, we set the hyper-parameters as follows: learning rate α = 1e −5, discount coefficient γ = 0.99, and a minibatch size of 32. The number of steps between target network updates is set to 10, 000. We use a simple exploration policy as the -greedy policy with the decreasing linearly from 1 to 0.01 over 3e 5 steps. For each experiment, we perform at least 3 independent runs and report the averaged . † formed by the current batch of states, we mainly employ the Soft-Impute algorithm throughout our experiments. We set the sub-sample rate to p = 0.9 of the Q matrix, and use a linear scheduler to increase the sampling rate every 2e 6 steps. Experiments across Various Value-based RL We show more across DQN, double DQN and dueling DQN in Fig. 23, 24, 25, 26, 27 and 28, respectively. For DQN, we complete all 57 Atari games using the proposed SV-RL, and verify that the majority of tasks contain low-rank structures (43 out of 57), where we can obtain consistent benefits from SV-RL. For each experiment, we associate the performance on the Atari game with its approximate rank. As mentioned in the main text, majority of the games benefits consistently from SV-RL. We note that roughly only 4 games, which have a significantly large rank, perform slightly worse than the vanilla DQN. Consistency and Interpretation Across all the experiments we have done, we observe that when the game possesses structures (i.e., being approximately low-rank), SV-RL can consistently improve the performance of various value-based RL techniques. The superior performance is maintained through most of the experiments, verifying the ability of the proposed SV-RL to harness the structures for better efficiency and performance in value-based deep RL tasks. 
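For reference, the training details quoted above can be collected into a single configuration sketch. The numeric values are the ones stated in the text; the structure of the config and the increment used by the sampling-rate scheduler are assumptions, since the text only says the rate is increased every 2e6 steps.

    config = dict(
        lr=1e-5, gamma=0.99, batch_size=32,
        target_update_steps=10_000,
        eps_start=1.0, eps_end=0.01, eps_decay_steps=300_000,
        sv_rl_p=0.9, p_schedule_every=2_000_000,
    )

    def epsilon(step, c=config):
        # Linear epsilon-greedy decay from 1.0 to 0.01 over 3e5 steps.
        frac = min(step / c["eps_decay_steps"], 1.0)
        return c["eps_start"] + frac * (c["eps_end"] - c["eps_start"])

    def sample_rate(step, c=config, increment=0.02, p_max=1.0):
        # Linear scheduler raising the sub-sample rate p every 2e6 steps
        # (the increment and cap are placeholders).
        return min(c["sv_rl_p"] + increment * (step // c["p_schedule_every"]), p_max)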
In the meantime, when the approximate rank is relatively higher (e.g., SEAQUEST), the performance of SV-RL can be similar or worse than the vanilla scheme, which also aligns well with our intuitions. Note that the majority of the games have an action space of size 18 (i.e., rank is at most 18), while some (e.g., PONG) only have 6 or less (i.e., rank is at most 6). We provide an additional study on the Inverted Pendulum problem with respect to the discretization scale. As described in Sec. 3, the dynamics is described by the angle and the angular speed as s = (θ,θ), and the action a is the torque applied. To solve the task with value iteration, the state and action spaces need to be discretized into fine-grids. Previously, the two-dimensional state space was discretized into 50 equally spaced points for each dimension and the one-dimensional action space was evenly discretized into 1000 actions, leading to a 2500 × 1000 Q-value matrix. Here we choose three different discretization values for state-action pairs: 400 × 100, 2500 × 1000, and 10000 × 4000, to provide different orders of discretization for both state and action values. As Table 1 reports, the approximate rank is consistently low when discretization varies, demonstrating the intrinsic low-rank property of the task. Fig. 29 and Table 1 further demonstrates the effectiveness of SVP: it can achieve almost the same policy as the optimal one even with only 20% observations. The reveal that as long as the discretization is fine enough to represent the optimal policy for the task, we would expect the final Q matrix after value iteration to have similar rank. Figure 29: Additional study on discretization scale. We choose three different discretization value on the Inverted Pendulum task, i.e. 400 (states, 20 each dimension) × 100 (actions), 2500 (states, 50 each dimension) × 1000 (actions), and 10000 (states, 100 each dimension) × 4000 (actions). First row reports the optimal policy, second row reports the SVP policy with 20% observation probability. Table 1: Additional study on discretization scale (cont.). We report the approximate rank as well as the performance metric (i.e., the average angular deviation) on different discretization scales. To further understand our approach, we provide another study on batch size for games of different rank properties. Two games from Fig. 7 are investigated; one with a small rank (Frostbite) and one with a high rank (Seaquest). Different batch sizes, 32, 64, and 128, are explored and we show the in Fig. 30. Intuitively, for a learning task, the more complex the learning task is, the more data it would need to fully learn the characteristics. For a complex game with higher rank, a small batch size may not be sufficient to capture the game, leading the recovered matrix via ME to impose a structure that deviates from the original, more complex structure of the game. In contrast, with more data, i.e., a larger batch size, the ME oracle attempts to find the best rank structure that would effectively describe the rich observations and at the same time, balance the reconstruction error. Such a structure is more likely to be aligned with the underlying complex task. Indeed, this is what we observe in Fig. 30. As expected, for Seaquest (high rank), the performance is worse than the vanilla DQN when the batch size is small. However, as the batch size increases, the performance gap becomes smaller, and eventually, the performance of SV-RL is the same when the batch size becomes 128. 
On the other hand, for games with low rank, one would expect that a small batch size would be enough to explore the underlying structure. Of course, a large batch size would not hurt since the game is intrinsically low-rank. In other words, our intuition would suggest SV-RL to perform better across different batch sizes. Again, we observe this phenomenon as expected in Fig. 30. For Frostbite (low rank), under different batch sizes, vanilla DQN with SV-RL consistently outperforms vanilla DQN by a certain margin. Figure 30: Additional study on batch size. We select two games for illustration, one with a small rank (Frostbite) and one with a high rank (Seaquest). We vary the batch size with 32, 64, and 128, and report the performance with and without SV-RL. | We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning. | 630 | scitldr |
Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties. At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information. Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks. This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity. The benchmark is based on thousands of ratings gathered by surveying 500 software developers. We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools. Our show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness. On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools. IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations. Reasoning about source code based on learned representations has various applications, such as predicting method names , detecting bugs and vulnerabilities , predicting types , detecting similar code , inferring specifications , code de-obfuscation (; a), and program repair . Many of these techniques are based on embeddings of source code, which map a given piece of code into a continuous vector representation that encodes some aspect of the semantics of the code. A core component of most code embeddings are semantic representations of identifier names, i.e., the names of variables, functions, classes, fields, etc. in source code. Similar to words in natural languages, identifiers are the basic building block of source code. Identifiers not only account for the majority of the vocabulary of source code, but they also convey important information about the (intended) meaning of code. To reason about identifiers and their meaning, code analysis techniques build on learned embeddings of identifiers, either by adapting embeddings that were originally proposed for natural languages (a; or with embeddings specifically designed for source code (a). Given the importance of identifier embeddings, a crucial challenge is measuring how effective an embedding represents the semantic relationships between identifiers. For word embeddings in natural language, the community has addressed this question through a series of gold standards (; a; ; ; ;). These gold standards define how similar two words are based on ratings by human judges, enabling an evaluation that measures how well an embedding reflects the human ratings. Unfortunately, simply reusing existing gold standards to identifiers in source code would be misleading. One reason is that the vocabularies of natural languages and source code overlap only partially, because source code contains various terms and abbreviations not found in natural language texts. 
Moreover, source code has a constantly growing vocabulary, as developers tend to invent new identifiers, e.g., for newly emerging application domains. Finally, even words present in both natural languages and source code may differ in their meaning due to computer science-specific meanings of some words, e.g., "float" or "string". This paper addresses the problem of measuring and comparing the effectiveness of embeddings of identifiers. We present IdBench, a benchmark for evaluating techniques that represent semantic similarities of identifiers. The basis of the benchmark is a dataset of developer opinions about the similarity of pairs of identifiers. We gather this dataset through two surveys that show realworld identifiers and code snippets to hundreds of developers, asking them to rate their similarity. Taking the developer opinions as a gold standard, IdBench allows for evaluating embeddings in a systematic way by measuring to what extent an embedding agrees with ratings given by developers. Moreover, inspecting pairs of identifiers for which an embedding strongly agrees or disagrees with the benchmark helps understand the strengths and weaknesses of current embeddings. Overall, we gather thousands of ratings from 500 developers. Cleaning and compiling this raw dataset into a benchmark yields several hundreds of pairs of identifiers with gold standard similarities, including identifiers from a wide range of application domains. We apply our approach to a corpus of JavaScript code, because several recent pieces of work on identifier names and code embeddings focus on this language (; b; a;). Applying our methodology to another language is straightforward. Based on the newly created benchmark, we evaluate and compare state-of-the-art embeddings of identifiers. We find that different embedding techniques differ heavily in terms of their ability to accurately represent identifier relatedness and similarity. The best available technique, the CBOW variant of FastText, accurately represents relatedness, but none of the available techniques accurately represents identifier similarities. One reason is that some embeddings are confused about identifiers with opposite meaning, e.g., rows and cols, and about identifiers that belong to the same application domain but are not similar. Another reason is that some embeddings miss synonyms, e.g., file and record. We also find that simple string distance functions, which measure the similarity of identifiers without any learning, are surprisingly effective, and even outperform some learned embeddings for the similarity task. In summary, this paper makes the following contributions. Methodology: To the best of our knowledge, we are the first to systematically evaluate embeddings of identifiers. Our methodology is based on surveying developers and summarizing their opinions into gold standard similarities of pairs of identifiers. Reusable benchmark: We make available a benchmark of hundreds of pairs of identifiers, providing a way to systematically evaluate existing and future embeddings. Comparison of state-of-the-art embeddings: We evaluate seven existing embeddings and string similarity functions, and discuss their strengths and weaknesses. We design IdBench so that it distinguishes two kinds of semantic relationships between identifiers (; ;). On the one hand, relatedness refers to the degree of association between two identifiers and covers various possible relations between them. 
For example, top and bottom are related because they are opposites, click and dblclick are related because they belong to the same general concept, and getBorderWidth and getPadding are related because they belong to the same application domain. On the other hand, similarity refers to the degree to which two identifiers have the same meaning, in the sense that one could substitute the other without changing the overall meaning. For example, length and size, as well as username and userid, are similar to each other. Similarity is a stronger semantic relationship than relatedness, because the former implies the latter, but not vice-versa. For example, the identifiers start and end are related, as they are opposites, IdBench includes three benchmark tasks: A relatedness task and two task to measure how well an embedding reflects the similarity of identifiers: a similarity task and a contextual similarity task. The following describes how we gather developer opinions that provide data for these tasks. Direct Survey of Developer Opinions This survey shows two identifiers to a developer and then directly asks how related and how similar the identifiers are. Figure 1a shows an example question from the survey. The developer is shown pairs of identifiers and is then asked to rate on a five-point Likert scale how related and how similar these identifiers are to each other. In total, each developer is shown 18 pairs of identifiers, which we randomly sample from a larger pool of pairs. Before showing the questions, we provide a brief description of what the developers are supposed to do, including an explanation of the terms "related" and "substitutable". The ratings gathered in the direct survey are the basis for the relatedness task and the similarity task of IdBench. This survey asks developers to pick an identifier that best fits a given code context, which indirectly asks about the similarity of identifiers The motivation is that identifier names alone may not provide enough information to fully judge how similar they are . For example, without any context, identifiers idx and hl may cause confusion for developers who are trying to judge their similarity. The survey addresses this challenge by showing the code context in which an identifier occurs, and by asking the developers to decide which of two given identifiers best fits this context. If, for a specific pair of identifiers, the developers choose both identifiers equally often, then the identifiers are likely to be similar to each other, since one can substitute the other. Figure 1b shows a question asked during the indirect survey. As shown in the example, for code contexts where the identifier occurs multiple times, we show multiple blanks that all refer to the same identifier. In total, we show 15 such questions to each participant of the survey. The ratings gathered in the indirect survey are the basis for the contextual similarity tasks of IdBench. We select identifiers and code contexts from a corpus of 50,000 JavaScript files . We select 300 pairs, made out of 488 identifiers, through a combination of automated and manual selection, aimed at a diverse set that covers different degrees of similarity and relatedness. The selection is guided by similarities between identifiers as judged by word embeddings Bruni et al. (2014b). 
The first step is to extract from the code corpus all identifier names that appear more than 100 times, which in about 17,000 identifiers, including method names, variable names, property names, other types of identifiers. Next, we compute word embeddings of the extracted identifiers, compute the cosine distance of all pairs of identifiers, and then sort the pairs according to their cosine distances. We use two word embeddings Mikolov et al. (2013a); Alon et al. (2018a), giving two sorted lists of pairs, and sample 150 pairs from each list as follows: (i) Pick 75 of the first 1,000 pairs, i.e., the most similar pairs according to the respective embedding, where 38 pairs are randomly sampled and 37 pairs are manually selected. We manually select some pairs because otherwise, we had observed a lack of pairs of synonymous identifiers. (ii) Randomly sample 50 from pairs 1,001 to 10,000, i.e., pairs that still have a relatively high similarity. (iii) Randomly sample 25 pairs from the remaining pairs, i.e., pairs that are mostly dissimilar to each other. To gather the code contexts for the indirect survey, we search the code corpus for occurrences of the selected identifiers. As the size of the context, we choose five lines, which provides sufficient context to choose the best identifier without overwhelming developers with large amounts of code. For each identifier, we randomly select five different contexts. When showing a specific pair of identifiers to a developer, we randomly select one of the gathered contexts for one of the two identifiers. Participants We pay developers via Amazon Mechanical Turk to perform the surveys. Participants take around 15 minutes to complete both surveys. In total, 500 developers participate in the survey, which yields at least 10 ratings for each pair of identifiers. To eliminate noise in the gathered data, e.g., due to lack of expertise or involvement by the participants, we clean the data by removing some participants and identifier pairs. Removing Outliers As a first filter, we remove outlier participants based on the Inter-Rater Agreement (IRA), which measures the degree of agreement between participants. We use Krippendorf's alpha coefficient, because it handles unequal sample sizes, which fits our data, since not all participants rate the same pairs, and because not all pairs have the same number of ratings. The coefficient ranges between zero and one, where zero represents complete disagreement and one represents perfect agreement. For each participant, we calculate the difference between her rating and the average of all the other ratings for each pair. Then, we average these differences for each rater, and discard participants with a difference above a threshold. We perform this computation both for the relatedness and similarity ratings from the direct survey, and then remove outliers based on the average difference across both ratings. Removing Downers As a second filter, we eliminate participants that decrease the overall IRA, called downers , because they bring the agreement level between all participants down. For each participant p, we compute IRA sim and IRA rel after removing the ratings of p from the data. If IRA sim or IRA rel increases by at least 10%, then we discard that participant's ratings. All IRA-based filtering of participants is based on ratings from the direct survey only. 
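The "downers" filter described above can be sketched with the krippendorff Python package; the ratings are assumed to be arranged as raters-by-pairs matrices with NaN for unseen pairs, the ordinal level of measurement is an assumption for the 5-point scale, and the "10% increase" is read here as a relative increase.

    import numpy as np
    import krippendorff

    def remove_downers(ratings_sim, ratings_rel, min_gain=0.10):
        # ratings_*: (n_raters, n_pairs) arrays from the direct survey, np.nan where
        # a rater did not rate a pair. A rater is a downer if dropping them raises
        # Krippendorff's alpha for similarity OR relatedness by at least min_gain.
        def alpha(m):
            return krippendorff.alpha(reliability_data=m,
                                      level_of_measurement="ordinal")
        base_sim, base_rel = alpha(ratings_sim), alpha(ratings_rel)
        keep = []
        for r in range(ratings_sim.shape[0]):
            sim = alpha(np.delete(ratings_sim, r, axis=0))
            rel = alpha(np.delete(ratings_rel, r, axis=0))
            if sim < base_sim * (1 + min_gain) and rel < base_rel * (1 + min_gain):
                keep.append(r)
        return ratings_sim[keep], ratings_rel[keep]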
For the indirect survey, computing the IRA would be inadequate because the code contexts shown along with a pair of identifiers are randomly sampled, i.e., a pair is rarely shown with the same context. Instead, we exploit the fact that the two surveys are shown in sequence to each participant. Specifically, we discard participants based on the first two filters for both the direct and indirect survey, assuming that participants discarded in the direct survey are not worth keeping in the indirect survey. Removing Pairs with Confusing Contexts As a third filter, we eliminate some pairs of identifiers used in the indirect survey. Since our random selection of code contexts may include contexts that are not helpful in deciding on the most suitable identifier, the ratings for some pairs are rather arbitrary. To mitigate this problem, we remove a pair if the difference in similarity as rated in the direct and indirect surveys exceeds some threshold. Table 2 shows the number of identifier pairs that remain in the benchmark after data cleaning. For each of the three tasks, we provide a small, medium, and large benchmark, which differ by the thresholds used during data cleaning. The smaller benchmarks use stricter thresholds and hence provide higher agreements between the participants, whereas the larger benchmarks offer more pairs. Converting Ratings to Scores To ease the evaluation of embeddings, we convert the ratings gathered for a specific pair during the developer surveys into a similarity score in the range. For the direct survey, we scale the 5-point Likert-scale ratings into the range and average all ratings for . This conversion yields an unbounded distance measure d for each pair, which we convert into a similarity score s by normalizing and inverting the distance:, where min d and max d are the minimum and maximum distances across all pairs. Examples Table 3 shows representative examples of identifier pairs and their scores for the three benchmark tasks. The example illustrates that the scores match human intuition and that the gold standard clearly distinguishes relatedness from similarity. Some of the highly related and highly similar pairs, e.g., substr and substring, are lexically similar, while others are synonyms, e.g., count and total. While identifiers like rows and columns are strongly related, one cannot substitute the other, and they hence have low similarity. Similarly miny, ypos represent distinct properties of the variable y. Finally, some pairs are either weakly or not at all related, e.g., re and destruct. Embeddings and String Distance Functions To assess how well existing embedding techniques represent the relatedness and similarity of identifiers, we evaluate five vector representations against IdBench. We evaluate (i) the continuous bag of words and the skip-gram variants of Word2vec (a ;b) ("w2v-cbow" and "w2v-sg"), because recent identifier-based code analysis tools, e.g., DeepBugs and NL2Type use it, (ii) FastText , a sub-word extension of Word2vec that represents words as character n-grams ("FT-cbow" and "FT-sg"), and (iii) an embedding technique specifically designed for code, which learns from paths through a structural, tree-based representation of code (a) ("path-based"). We train all embeddings on the same code corpus of 50,000 JavaScript files. For each embedding, we experiment with various hyper-parameters (e.g., dimension, number of context words) and report only for the best performing models. 
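Training the Word2vec and FastText variants on a token corpus can be done with gensim; the snippet below is only a sketch, the toy corpus and all hyper-parameter values are placeholders (the dimensions and context settings actually used are not given in this excerpt), and the path-based embedding requires its own AST-path pipeline that is omitted here.

    from gensim.models import Word2Vec, FastText

    # Toy stand-in for token sequences extracted from the 50K-file JavaScript corpus.
    corpus = [["count", "total", "getBorderWidth", "getPadding"],
              ["file", "record", "substr", "substring", "length", "size"]]

    common = dict(vector_size=100, window=5, min_count=1, epochs=10)  # placeholders
    w2v_cbow = Word2Vec(corpus, sg=0, **common)
    w2v_sg = Word2Vec(corpus, sg=1, **common)
    ft_cbow = FastText(corpus, sg=0, **common)   # character n-gram sub-words
    ft_sg = FastText(corpus, sg=1, **common)

    vector = ft_cbow.wv["substring"]             # embedding lookup for one identifier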
In addition to neural embeddings of identifiers, we also evaluate two string distance functions: Levenshtein's edit distance and Needleman-Wunsch distance . These functions use lexical similarity as a proxy for the semantic relatedness of identifiers. We consider these functions because they are used in identifier-based code analysis tools, including a bug detection tool deployed at Google . Measuring Agreement with the Benchmark We measure the magnitude of agreement of an embedding with IdBench by computing Spearman's rank correlation ρ between the cosine similarities of pairs of identifier vectors and the gold standard of similarity scores. For string similarity functions, we compute the similarity score s = 1 − d norm for each pair based on a normalization d norm of the distance returned by the function. Figure 4 shows the agreement of the evaluated techniques with the small, medium, and large variants of IdBench. All embeddings and string distance functions agree to some extent with the gold standard. Overall, FastText-cbow consistently outperforms all other techniques, both for relatedness and similarity. We discuss the in more detail in the following. Figure 4a, all techniques achieve relatively high levels of agreement, with correlations between 46% and 73%. The neurally learned embeddings clearly outperform the string distance-based similarity functions (53-73% vs. 46-48%), showing that the effort of learning a semantic representation is worthwhile. In particular, the learned embeddings match or even slightly exceed the IRA, which is sometimes considered an upper bound of how strongly an embedding may correlate with a similarity-based benchmark . Comparing different embedding techniques with each other, we find that both FastText variants achieve higher scores than all other embeddings. In contrast, despite using additional structural information of source code, path-based embeddings score only comparably to Word2vec. Similarity Figure 4b shows a much lower agreement with the gold standard for similarity than for relatedness. One explanation is that encoding semantic similarity is a harder task than encoding the less strict notion of relatedness. Similar to relatedness, FastText-cbow shows the strongest agreement, ranging between 37% and 39%. A perhaps surprising is string distance functions outperforming some of the embeddings. Contextual similarity The of the contextual similarity task (Figure 4c), confirm the findings from the similarity task. All studied techniques are less effective than for relatedness, and FastText-cbow achieves the highest agreement with IdBench. One difference between the for similarity and contextual similarity is that we here observe higher scores by path-based embeddings. We attribute this to the ability of the path-based model to represent code snippets in a vector space. While the best available embeddings are highly effective at representing relatedness, none of the studied techniques reaches the same level of agreement for similarity. In fact, even the best in Figures 4b and 4c (39%) clearly stay beyond the IRA of our benchmark (62%), showing a huge potential for improvement. For many applications of embeddings of identifiers, semantic similarity is crucial. For example, tools to suggest suitable variable or method names (; a) aim for the name that is most similar, not only most related, to the concept represented by the variable or method. 
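A sketch of this agreement measurement: given a gold standard mapping identifier pairs to scores and an embedding lookup, compute cosine similarities and correlate them with the gold scores via Spearman's ρ; a simple normalized Levenshtein similarity is included as the lexical baseline (the exact normalization d_norm used for the string distances may differ).

    import numpy as np
    from scipy.stats import spearmanr

    def agreement(embedding, gold):
        # embedding: dict identifier -> vector; gold: dict (id1, id2) -> benchmark score.
        cos, ref = [], []
        for (a, b), score in gold.items():
            va, vb = embedding[a], embedding[b]
            cos.append(float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))))
            ref.append(score)
        rho, _ = spearmanr(cos, ref)
        return rho

    def levenshtein_similarity(a, b):
        # Lexical baseline: 1 - edit_distance / max(len(a), len(b)).
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                             prev[j - 1] + (a[i - 1] != b[j - 1]))
            prev = cur
        return 1.0 - prev[n] / max(m, n, 1)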
Likewise, identifier name-based tools for finding programming errors or variable misuses want to identify situations where the developer uses a wrong, but perhaps related, variable. The lack of embeddings that accurately represent the semantic similarities of identifiers motivates more work on embedding techniques suitable for this task. To better understand why current embeddings sometimes fail to accurately represent similarities, Table 1 shows the most similar identifiers of selected identifiers according to the FastText-cbow and path-based embeddings. The examples illustrate two observations. First, FastText, due to its use of n-grams , tends to cluster identifiers based on lexical commonalities. While many lexically similar identifiers are also semantically similar, e.g., substr and substring, this approach misses other synonyms, e.g., item and entry. Another downside is that lexical similarity may also establish wrong relationships. For example, substring and substrCount represent different concepts, but FastText finds them to be highly similar. Second, in contrast to FastText, path-based embeddings tend to cluster words based on their structural and syntactical contexts. This approach helps the embeddings to identify synonyms despite their lexical differences, e.g., count and total, or files and records. The downside is that it also clusters various related but not similar identifiers, e.g., minText and maxText, or substr and getPadding. Some of these identifiers even have opposing meanings, e.g., rows and cols, which can mislead code analysis tools when reasoning about the semantics of code. A somewhat surprising is that simple string distance functions achieve a level of agreement with IdBench's similarity gold standards as high as some learned embeddings. The reason why string distance functions sometimes correctly identify semantic similarities is that some semantically similar identifiers are also be lexically similar. One downside of lexical approaches is that they miss synonymous identifiers, e.g., count and total. The wide use of word embeddings in NLP raises the question of how to compare and evaluate word embeddings. Several gold standards based on human judgments have been proposed, focusing on either relatedness (; a;) or similarity (; ; ;) of words. While these existing gold standards for NL words have been extremely inspiring, they are insufficient to evaluate embeddings of identifiers. One reason is that the vocabulary of source code contains various words not found in standard NL vocabulary, e.g., abbreviations and domain-specific terms, leading to very large vocabularies . Moreover, even identifiers found also in NL may have a different meaning in source code, e.g., float or string. This work is the first to address the need for a gold standard for identifiers. Data gathering Asking human raters how related or similar two words are was first proposed by and then adopted by others (; ; ;). Our direct survey also follows this methodology. propose to gather judgments about contextual similarity by asking participants to choose a word to fill in a blank, an idea we adopt in our indirect survey. To choose words and pairs of words, prior work relies on manual selection , preexisting free association databases , e.g., USF or VerbNet (;, or cosine similarities according to pre-existing models (a; . We follow the latter approach, as it minimizes human bias while covering a wide range of degrees of relatedness and similarity. 
Inter-rater agreement Gold standards for NL words reach an IRA of 0.61 and 0.67 . Our "small" dataset reaches similar levels of agreement, showing that the rates in IdBench represent a genuine human intuition. As noted by , the IRA also gives an upper bound of the expected correlation between the tested model and the gold standard. Our show that current models still leave plenty of room for improvement, especially w.r.t. similarity. Embeddings of identifiers Embeddings of identifiers are at the core of several code analysis tools. A popular approach, e.g., for bug detection , type prediction , or vulnerability detection , is applying Word2vec (a; to token sequences, which corresponds to the Word2vec embedding evaluated in Section 3. train an RNN-based language model and extract its final hidden layer as an embedding of identifiers. provide a more comprehensive survey of embeddings for source code. Beyond learned embeddings, string similarity functions are used in other name-based tools, e.g., for detecting bugs or for inferring specifications . The quality of embeddings is crucial in these and other code analysis tools, and IdBench will help to improve the state of the art. Embeddings of programs Beyond embeddings of identifiers, there is a stream of work on embedding larger parts of a program. use a log-bilinear, neural language model to predict the names of methods. Other work embeds code based on graph neural networks or sequence-based neural networks applied to paths through a graph representation of code (; b; ; ; ;). Code2seq embeds code and then generates sequences of NL words (a). gives a detailed survey of learned models of code. To evaluate embeddings of programs, propose the COSET benchmark that provide thousands of programs with semantic labels. Ours and their work complement each other, as COSET evaluates embeddings of entire programs, whereas IdBench evaluates embeddings of identifiers. Since identifiers are a basic building block of source code, a benchmark for improving embeddings of identifiers will eventually also benefit learning-based code analysis tools. This paper presents the first benchmark for evaluating vector space embeddings of identifiers names, which are at the core of many machine learning models of source code. We compile thousands of ratings gathered from 500 developers into three benchmarks that provide gold standard similarity scores representing the relatedness, similarity, and contextual similarity of identifiers. Using IdBench to experimentally compare five embedding techniques and two string distance functions shows that these techniques differ significantly in their agreement with our gold standard. The best available embedding is very effective at representing how related identifiers are. However, all studied techniques show huge room for improvement in their ability to represent how similar identifiers are. IdBench will help steer future efforts on improved embeddings of identifiers, which will eventually enable better machine learning models of source code. | A benchmark to evaluate neural embeddings of identifiers in source code. | 631 | scitldr |
Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget. This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions. First, we argue that the existing MMD loss function may discourage the learning of fine details in data as it attempts to contract the discriminator outputs of real data. To address this issue, we propose a repulsive loss function to actively learn the difference among the real data by simply rearranging the terms in MMD. Second, inspired by the hinge loss, we propose a bounded Gaussian kernel to stabilize the training of MMD-GAN with the repulsive loss function. The proposed methods are applied to the unsupervised image generation tasks on CIFAR-10, STL-10, CelebA, and LSUN bedroom datasets. Results show that the repulsive loss function significantly improves over the MMD loss at no additional computational cost and outperforms other representative loss functions. The proposed methods achieve an FID score of 16.21 on the CIFAR-10 dataset using a single DCGAN network and spectral normalization. Generative adversarial nets (GANs) BID7 ) are a branch of generative models that learns to mimic the real data generating process. GANs have been intensively studied in recent years, with a variety of successful applications (; Li et al. (2017b);;; BID13 ). The idea of GANs is to jointly train a generator network that attempts to produce artificial samples, and a discriminator network or critic that distinguishes the generated samples from the real ones. Compared to maximum likelihood based methods, GANs tend to produce samples with sharper and more vivid details but require more efforts to train. Recent studies on improving GAN training have mainly focused on designing loss functions, network architectures and training procedures. The loss function, or simply loss, defines quantitatively the difference of discriminator outputs between real and generated samples. The gradients of loss functions are used to train the generator and discriminator. This study focuses on a loss function called maximum mean discrepancy (MMD), which is well known as the distance metric between two probability distributions and widely applied in kernel two-sample test BID8 ). Theoretically, MMD reaches its global minimum zero if and only if the two distributions are equal. Thus, MMD has been applied to compare the generated samples to real ones directly (; BID5) and extended as the loss function to the GAN framework recently (; Li et al. (2017a); ).In this paper, we interpret the optimization of MMD loss by the discriminator as a combination of attraction and repulsion processes, similar to that of linear discriminant analysis. We argue that the existing MMD loss may discourage the learning of fine details in data, as the discriminator attempts to minimize the within-group variance of its outputs for the real data. To address this issue, we propose a repulsive loss for the discriminator that explicitly explores the differences among real data. The proposed loss achieved significant improvements over the MMD loss on image generation tasks of four benchmark datasets, without incurring any additional computational cost. Furthermore, a bounded Gaussian kernel is proposed to stabilize the training of discriminator. 
As such, using a single kernel in MMD-GAN is sufficient, in contrast to the linear combination of kernels used in Li et al. (2017a) and related work. By using a single kernel, the computational cost of the MMD loss can potentially be reduced in a variety of applications. The paper is organized as follows. Section 2 reviews the GANs trained using the MMD loss (MMD-GAN). We propose the repulsive loss for the discriminator in Section 3, introduce two practical techniques to stabilize the training process in Section 4, and present the results of extensive experiments in Section 5. In the last section, we discuss the connections between our model and existing work. In this section, we introduce the GAN model and MMD loss. Consider a random variable X ∈ 𝒳 with an empirical data distribution P_X to be learned. A typical GAN model consists of two neural networks: a generator G and a discriminator D. The generator G maps a latent code z with a fixed distribution P_Z (e.g., Gaussian) to the data space 𝒳: y = G(z) ∈ 𝒳, where y represents the generated samples with distribution P_G. The discriminator D evaluates the scores D(a) ∈ R^d of a real or generated sample a. This study focuses on image generation tasks using convolutional neural networks (CNN) for both G and D. Several loss functions have been proposed to quantify the difference of the scores between real and generated samples, {D(x)} and {D(y)}, including the minimax loss and non-saturating loss (BID7), hinge loss, Wasserstein loss (BID10) and maximum mean discrepancy (MMD) (Li et al. (2017a)) (see Appendix B.1 for more details). Among them, MMD uses the kernel embedding φ(a) = k(·, a) associated with a characteristic kernel k such that φ is infinite-dimensional and ⟨φ(a), φ(b)⟩_H = k(a, b). The squared MMD distance between two distributions P and Q is MMD²(P, Q) = E_{a,a'∼P}[k(a, a')] − 2 E_{a∼P, b∼Q}[k(a, b)] + E_{b,b'∼Q}[k(b, b')] (Eq. 1). The kernel k(a, b) measures the similarity between two samples a and b. BID8 proved that, using a characteristic kernel k, MMD²(P_X, P_G) reaches its minimum if and only if P_X = P_G (Li et al. (2017a)). Thus, the objective functions for G and D could be (Li et al. (2017a)): min_G L_G^mmd = MMD²(P_X, P_G) (Eq. 2) and min_D L_D^att = −MMD²(P_X, P_G) (Eq. 3). MMD-GAN has been shown to be more effective than the model that directly uses MMD as the loss function for the generator G (Li et al. (2017a)). Prior work showed that MMD and the Wasserstein metric are weaker objective functions for GAN than the Jensen-Shannon (JS) divergence (related to the minimax loss) and the total variation (TV) distance (related to the hinge loss). The reason is that convergence of P_G to P_X in JS-divergence and TV distance also implies convergence in MMD and the Wasserstein metric. Weak metrics are desirable as they provide more information on adjusting the model to fit the data distribution. It has also been proved that the GAN trained using the minimax loss and gradient updates on model parameters is locally exponentially stable near equilibrium, while the GAN using the Wasserstein loss is not. In Appendix A, we demonstrate that the MMD-GAN trained by gradient descent is locally exponentially stable near equilibrium. In this section, we interpret the training of MMD-GAN (using L_D^att and L_G^mmd) as a combination of attraction and repulsion processes, similar to that of linear discriminant analysis. First, consider a linear discriminant analysis (LDA) model as the discriminator. The task is to find a projection w to maximize the between-group variance w^T µ_x − w^T µ_y and minimize the within-group variance w^T (Σ_x + Σ_y) w, where µ and Σ are the group mean and covariance. In MMD-GAN, the neural-network discriminator works in a similar way as LDA.
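Before unpacking this interpretation, Eq. 1 to 3 above can be sketched directly on mini-batches of discriminator scores. The snippet uses a single Gaussian kernel and the biased MMD² estimator (it keeps the diagonal kernel terms) purely for brevity; the kernel choice, σ value, and estimator details are assumptions rather than the exact setup used in the experiments.

    import torch

    def rbf_kernel(a, b, sigma=1.0):
        # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) on discriminator scores.
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    def mmd2(dx, dy, kernel=rbf_kernel):
        # Biased estimate of MMD^2(P_X, P_G) from score batches dx = D(x), dy = D(y).
        k_xx, k_yy, k_xy = kernel(dx, dx), kernel(dy, dy), kernel(dx, dy)
        return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

    # Attractive formulation (Eq. 2 and 3): G minimizes mmd2(dx, dy) while D
    # minimizes -mmd2(dx, dy).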
By minimizing L att D, the discriminator D tackles two tasks: DISPLAYFORM0, causes the two groups {D(x)} and {D(y)} to repel each other (see FIG1, or maximize betweengroup variance; and 2) D increases DISPLAYFORM1 e. contracts {D(x)} and {D(y)} within each group (see FIG1, or minimize the within-group variance. We refer to loss functions that contract real data scores as attractive losses. We argue that the attractive loss L att D (Eq. 3) has two issues that may slow down the GAN training:1. The discriminator D may focus more on the similarities among real samples (in order to contract {D(x)}) than the fine details that separate them. Initially, G produces low-quality samples and it may be adequate for D to learn the common features of {x} in order to distinguish between {x} and {y}. Only when {D(y)} is sufficiently close to {D(x)} will D learn the fine details of {x} to be able to separate {D(x)} from {D(y)}. Consequently, D may leave out some fine details in real samples, thus G has no access to them during training.2. As shown in FIG1, the gradients on D(y) from the attraction (blue arrows) and repulsion (orange arrows) terms in L att D (and thus L mmd G) may have opposite directions during training. Their summation may be small in magnitude even when D(y) is far away from D(x), which may cause G to stagnate locally. Therefore, we propose a repulsive loss for D to encourage repulsion of the real data scores {D(x)}: DISPLAYFORM2 The generator G uses the same MMD loss L mmd G as before (see Eq. 2). Thus, the adversary lies in the fact that D contracts {D(y)} via maximizing FIG1 ) while G expands {D(y)} (see FIG1). Additionally, D also learns to separate the real data by minimizing DISPLAYFORM3 DISPLAYFORM4, which actively explores the fine details in real samples and may in more meaningful gradients for G. Note that in Eq. 4, D does not explicitly push the average score of {D(y)} away from that of {D(x)} because it may have no effect on the pair-wise sample distances. But G aims to match the average scores of both groups. Thus, we believe, compared to the model using DISPLAYFORM5 and L rep D is less likely to yield opposite gradients when {D(y)} and {D(x)} are distinct (see FIG1). In Appendix A, we demonstrate that GAN trained using gradient descent and the repulsive MMD loss (L At last, we identify a general form of loss function for the discriminator D: DISPLAYFORM6 In this section, we propose two approaches to stabilize the training of MMD-GAN: 1) a bounded kernel to avoid the saturation issue caused by an over-confident discriminator; and 2) a generalized power iteration method to estimate the spectral norm of a convolutional kernel, which was used in spectral normalization on the discriminator in all experiments in this study unless specified otherwise. For MMD-GAN, the following two kernels have been used:• Gaussian radial basis function (RBF), or Gaussian kernel (Li et al. (2017a) DISPLAYFORM0 2 ) where σ > 0 is the kernel scale or bandwidth.• Rational quadratic kernel DISPLAYFORM1, where the kernel scale α > 0 corresponds to a mixture of Gaussian kernels with a Gamma(α, 1) prior on the inverse kernel scales σ −1.It is interesting that both studies used a linear combination of kernels with five different kernel scales, i.e., DISPLAYFORM2, where σ i ∈ {1, 2, 4, 8, 16}, α i ∈ {0.2, 0.5, 1, 2, 5} (see FIG0 and 2c for illustration). 
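Stepping back to the two discriminator losses just contrasted, the sketch below shows the attractive loss L_att_D = -MMD^2 next to the repulsive rearrangement L_rep_D of Eq. 4; the question of kernel scales is taken up again right after. The sign convention assumes the discriminator minimises its loss, and `kernel` stands for any pairwise kernel function, e.g. the `gaussian_kernel` helper from the previous sketch.

```python
import numpy as np

def _mean_offdiag(k):
    # mean of k over pairs of distinct samples (drops the diagonal)
    n = k.shape[0]
    return (k.sum() - np.trace(k)) / (n * (n - 1))

def d_loss_attractive(dx, dy, kernel):
    # L_att^D = -MMD^2: minimising it contracts {D(x)} and {D(y)} within each
    # group while pushing the two groups apart.
    return -(_mean_offdiag(kernel(dx, dx)) + _mean_offdiag(kernel(dy, dy))
             - 2.0 * kernel(dx, dy).mean())

def d_loss_repulsive(dx, dy, kernel):
    # Repulsive rearrangement (Eq. 4): minimising it spreads the real scores
    # {D(x)} apart while contracting the generated scores {D(y)}. There is no
    # explicit real-vs-fake term; matching the two groups is left to the
    # generator's MMD loss.
    return _mean_offdiag(kernel(dx, dx)) - _mean_offdiag(kernel(dy, dy))
```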
We suspect the reason is that a single kernel k(a, b) is saturated when the distance a − b is either too large or too small compared to the kernel scale (see FIG0 and 2d), which may cause diminishing gradients during training. Both Li et al. (2017a) and applied penalties on the discriminator parameters but not to the MMD loss itself. Thus the saturation issue may still exist. Using a linear combination of kernels with different kernel scales may alleviate this issue but not eradicate it. Inspired by the hinge loss (see Appendix B.1), we propose a bounded RBF (RBF-B) kernel for the discriminator. The idea is to prevent D from pushing {D(x)} too far away from {D(y)} and causing saturation. For L att D in Eq. 3, the RBF-B kernel is: DISPLAYFORM3 For L rep D in Eq. 4, the RBF-B kernel is: DISPLAYFORM4 where b l and b u are the lower and upper bounds. As such, a single kernel is sufficient and we set σ = 1, b l = 0.25 and b u = 4 in all experiments for simplicity and leave their tuning for future work. It should be noted that, like the case of hinge loss, the RBF-B kernel is used only for the discriminator to prevent it from being over-confident. The generator is always trained using the original RBF kernel, thus we retain the interpretation of MMD loss L mmd G as a metric. RBF-B kernel is among many methods to address the saturation issue and stabilize MMD-GAN training. We found random sampling kernel scale, instance noise (Sønderby et al. ) and label smoothing may also improve the model performance and stability. However, the computational cost of RBF-B kernel is relatively low. Without any Lipschitz constraints, the discriminator D may simply increase the magnitude of its outputs to minimize the discriminator loss, causing unstable training 3. Spectral normalization divides the weight matrix of each layer by its spectral norm, which imposes an upper bound on the magnitudes of outputs and gradients at each layer of D . However, to estimate the spectral norm of a convolution kernel, reshaped the kernel into a matrix. We propose a generalized power iteration method to directly estimate the spectral norm of a convolution kernel (see Appendix C for details) and applied spectral normalization to the discriminator in all experiments. In Appendix D.1, we explore using gradient penalty to impose the Lipschitz constraint BID10;; ) for the proposed repulsive loss. In this section, we empirically evaluate the proposed 1) repulsive loss L (Li et al. (2017a) ) and rational quadratic kernel (MMD-rq) ), as well as non-saturating loss BID7 ) and hinge loss . To show the efficacy of RBF-B kernel, we applied it to both L ). DISPLAYFORM0 Dataset: The loss functions were evaluated on four datasets: 1) CIFAR-10 (50K images, 32 × 32 pixels) ; 2) STL-10 (100K images, 48 × 48 pixels) BID3 ); 3) CelebA (about 203K images, 64 × 64 pixels) (Liu et al. FORMULA8); and 4) LSUN bedrooms (around 3 million images, 64×64 pixels) (Yu et al. FORMULA8). The images were scaled to range [−1, 1] to avoid numeric issues. FORMULA8 ) was used in the generator, and spectral normalization with the generalized power iteration (see Appendix C) in the discriminator. For MMD related losses, the dimension of discriminator output layer was set to 16; for non-saturating loss and hinge loss, it was 1. In Appendix D.2, we investigate the impact of discriminator output dimension on the performance of repulsive loss. ) and thus omitted.3 On LSUN-bedroom, MMD-rbf and MMD-rq did not achieve reasonable and thus are omitted. 
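Relating back to the bounded kernel defined earlier in this section, here is a rough sketch of the clipping idea. It is simplified relative to Eqs. 6-7, which apply the lower and upper bounds to different terms depending on whether the attractive or the repulsive loss is used; the version below simply keeps the exponent inside [b_l, b_u] and is our own shorthand.

```python
import numpy as np

def rbf_bounded(a, b, sigma=1.0, b_l=0.25, b_u=4.0):
    # Simplified bounded RBF: clip the squared-distance exponent into [b_l, b_u]
    # so that the kernel does not saturate for pairs that are very far apart or
    # nearly identical. Used only in the discriminator loss; the generator keeps
    # the plain RBF kernel so that its loss remains a valid MMD.
    d2 = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
    return np.exp(-np.clip(d2 / (2.0 * sigma ** 2), b_l, b_u))
```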
Hyper-parameters: We used Adam optimizer (Kingma & Ba FORMULA8) with momentum parameters β 1 = 0.5, β 2 = 0.999; two-timescale update rule (TTUR) BID12 ) with two learning rates (ρ D, ρ G) chosen from {1e-4, 2e-4, 5e-4, 1e-3} (16 combinations in total); and batch size 64. Fine-tuning on learning rates may improve the model performance, but constant learning rates were used for simplicity. All models were trained for 100K iterations on CIFAR-10, STL-10, CelebA and LSUN bedroom datasets, with n dis = 1, i.e., one discriminator update per generator update 4. For MMD-rbf, the kernel scales σ i ∈ {1, √ 2, 2, 2 √ 2, 4} were used due to a better performance than the original values used in Li et al. (2017a). For MMD-rq, α i ∈ {0.2, 0.5, 1, 2, 5}. For MMD-rbf-b, MMD-rep, MMD-rep-b, a single Gaussian kernel with σ = 1 was used. Evaluation metrics: Inception score (IS) , Fréchet Inception distance (FID) BID12 ) and multi-scale structural similarity (MS-SSIM) were used for quantitative evaluation. Both IS and FID are calculated using a pre-trained Inception model . Higher IS and lower FID scores indicate better image quality. MS-SSIM calculates the pair-wise image similarity and is used to detect mode collapses among images of the same class . Lower MS-SSIM values indicate perceptually more diverse images. For each model, 50K randomly generated samples and 50K real samples were used to calculate IS, FID and MS-SSIM. TAB0 shows the Inception score, FID and MS-SSIM of applying different loss functions on the benchmark datasets with the optimal learning rate combinations tested experimentally. Note that the same training setup (i.e., DCGAN + BN + SN + TTUR) was applied for each loss function. We observed that: 1) MMD-rep and MMD-rep-b performed significantly better than MMD-rbf and MMD-rbf-b respectively, showing the proposed repulsive loss L rep D (Eq. 4) greatly improved over the attractive loss L att D (Eq. 3); 2) Using a single kernel, MMD-rbf-b performed better than MMD-rbf and MMD-rq which used a linear combination of five kernels, indicating that the kernel saturation may be an issue that slows down MMD-GAN training; 3) MMD-rep-b performed comparable or better than MMD-rep on benchmark datasets where we found the RBF-B kernel managed to stabilize MMD-GAN training using repulsive loss. 4) MMD-rep and MMD-rep-b performed significantly better than the non-saturating and hinge losses, showing the efficacy of the proposed repulsive loss. Additionally, we trained MMD-GAN using the general loss L D,λ (Eq. 5) for discriminator and L mmd G (Eq. 2) for generator on the CIFAR-10 dataset. Each color bar represents the FID score using a learning rate combination (ρ D, ρ G), in the order of (1e-4, 1e-4), (1e-4, 2e-4),...,(1e-3, 1e-3). The discriminator was trained using L D,λ (Eq. 5) with λ ∈ {-1, -0.5, 0, 0.5, 1, 2}, and generator using L mmd G (Eq. 2). We use the FID> 30 to indicate that the model diverged or produced poor .of MMD-GAN with RBF and RBF-B kernel 5. Note that when λ = −1, the models are essentially MMD-rbf (with a single Gaussian kernel) and MMD-rbf-b when RBF and RBF-B kernel are used respectively. 
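For the λ-sweep discussed next, here is a sketch of one way to write the general discriminator loss so that λ = -1 recovers the attractive loss (-MMD^2 with a single kernel) and λ = 1 the repulsive one. This parameterisation is consistent with the special cases and behaviours quoted in the text (e.g. λ much larger than 1 pulling generated scores toward real ones), but the exact form of Eq. 5 should be checked against the paper; `kernel` is any pairwise kernel as in the earlier sketches.

```python
import numpy as np

def _mean_offdiag(k):
    n = k.shape[0]
    return (k.sum() - np.trace(k)) / (n * (n - 1))

def d_loss_general(dx, dy, kernel, lam=1.0):
    # L_{D,lambda} = lam * E[k(D(x), D(x'))]
    #                - (lam - 1) * E[k(D(x), D(y))]
    #                - E[k(D(y), D(y'))]
    # lam = -1 -> attractive loss; lam = 1 -> repulsive loss; lam >> 1 expands
    # real scores and pulls generated scores toward them, a divergent regime.
    e_xx = _mean_offdiag(kernel(dx, dx))
    e_yy = _mean_offdiag(kernel(dy, dy))
    e_xy = kernel(dx, dy).mean()
    return lam * e_xx - (lam - 1.0) * e_xy - e_yy
```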
We observed that: 1) the model performed well using repulsive loss (i.e., λ ≥ 0), with λ = 0.5, 1 slightly better than λ = −0.5, 0, 2; 2) the MMD-rbf model can be significantly improved by simply increasing λ from −1 to −0.5, which reduces the attraction of discriminator on real sample scores; 3) larger λ may lead to more diverged models, possibly because the discriminator focuses more on expanding the real sample scores over adversarial learning; note when λ 1, the model would simply learn to expand all real sample scores and pull the generated sample scores to real samples', which is a divergent process; 4) the RBF-B kernel managed to stabilize MMD-rep for most diverged cases but may occasionally cause the FID score to rise up. The proposed methods were further evaluated in Appendix A, C and D. In Appendix A.2, we used a simulation study to show the local stability of MMD-rep trained by gradient descent, while its global stability is not guaranteed as bad initialization may lead to trivial solutions. The problem may be alleviated by adjusting the learning rate for generator. In Appendix C.3, we showed the proposed generalized power iteration (Section 4.2) imposes a stronger Lipschitz constraint than the method in , and benefited MMD-GAN training using the repulsive loss. Moreover, the RBF-B kernel managed to stabilize the MMD-GAN training for various configurations of the spectral normalization method. In Appendix D.1, we showed the gradient penalty can also be used with the repulsive loss. In Appendix D.2, we showed that it was better to use more than one neuron at the discriminator output layer for the repulsive loss. The discriminator outputs may be interpreted as a learned representation of the input samples. FIG8 visualizes the discriminator outputs learned by the MMD-rbf and proposed MMD-rep methods on CIFAR-10 dataset using t-SNE (van der Maaten FORMULA4). MMD-rbf ignored the class structure in data (see FIG8) while MMD-rep learned to concentrate the data from the same class and separate different classes to some extent FIG8. This is because the discriminator D has to actively learn the data structure in order to expands the real sample scores {D(x)}. Thus, we speculate that techniques reinforcing the learning of cluster structures in data may further improve the training of MMD-GAN.In addition, the performance gain of proposed repulsive loss (Eq. 4) over the attractive loss (Eq. 3) comes at no additional computational cost. In fact, by using a single kernel rather than a linear combination of kernels, MMD-rep and MMD-rep-b are simpler than MMD-rbf and MMD-rq. Besides, given a typically small batch size and a small number of discriminator output neurons (64 and 16 in our experiments), the cost of MMD over the non-saturating and hinge loss is marginal compared to the convolution operations. In Appendix D.3, we provide some random samples generated by the methods in our study. This study extends the previous work on MMD-GAN (Li et al. (2017a) ) with two contributions. First, we interpreted the optimization of MMD loss as a combination of attraction and repulsion processes, and proposed a repulsive loss for the discriminator that actively learns the difference among real data. Second, we proposed a bounded Gaussian RBF (RBF-B) kernel to address the saturation issue. Empirically, we observed that the repulsive loss may in unstable training, due to factors including initialization (Appendix A.2), learning rate (FIG7 and Lipschitz constraints on the discriminator (Appendix C.3). 
The RBF-B kernel managed to stabilize the MMD-GAN training in many cases. Tuning the hyper-parameters in RBF-B kernel or using other regularization methods may further improve our . The theoretical advantages of MMD-GAN require the discriminator to be injective. The proposed repulsive loss (Eq. 4) attempts to realize this by explicitly maximizing the pair-wise distances among the real samples. Li et al. (2017a) achieved the injection property by using the discriminator as the encoder and an auxiliary network as the decoder to reconstruct the real and generated samples, which is more computationally extensive than our proposed approach. On the other hand,; imposed a Lipschitz constraint on the discriminator in MMD-GAN via gradient penalty, which may not necessarily promote an injective discriminator. The idea of repulsion on real sample scores is in line with existing studies. It has been widely accepted that the quality of generated samples can be significantly improved by integrating labels (; ;) or even pseudo-labels generated by k-means method BID9 ) in the training of discriminator. The reason may be that the labels help concentrate the data from the same class and separate those from different classes. Using a pre-trained classifier may also help produce vivid image samples BID14 ) as the learned representations of the real samples in the hidden layers of the classifier tend to be well separated/organized and may produce more meaningful gradients to the generator. At last, we note that the proposed repulsive loss is orthogonal to the GAN studies on designing network structures and training procedures, and thus may be combined with a variety of novel techniques. For example, the ResNet architecture BID11 ) has been reported to outperform the plain DCGAN used in our experiments on image generation tasks (; BID10) and self-attention module may further improve the . On the other hand, proposed to progressively grows the size of both discriminator and generator and achieved the state-of-the-art performance on unsupervised training of GANs on the CIFAR-10 dataset. Future work may explore these directions. This section demonstrates that, under mild assumptions, MMD-GAN trained by gradient descent is locally exponentially stable at equilibrium. It is organized as follows. The main assumption and proposition are presented in Section A.1, followed by simulation study in Section A.2 and proof in Section A.3. We discuss the indications of assumptions on the discriminator of GAN in Section A.4. We consider GAN trained using the MMD loss L DISPLAYFORM0 where Thus in contrast to Assumption 1, we assume Assumption 2. For GANs using MMD loss in Eq. S1, and random initialization on parameters, at equilibrium, DISPLAYFORM1 DISPLAYFORM2 is not constant almost everywhere. We use a simulation study in Section A.2 to show that D θ * D (x) = 0 does not hold in general for MMD loss. Based on Assumption 2, we propose the following proposition and prove it in Appendix A.3: Proposition 1. If there exists θ * G ∈ Θ G such that P θ * G = P X, then GANs with MMD loss in Eq. S1 has equilibria (θ * G, θ D) for any θ D ∈ Θ D. Moreover, the model trained using gradient descent methods is locally exponentially stable at (θ * DISPLAYFORM3 There may exist non-realizable cases where the mapping between P Z and P X cannot be represented by any generator G θ G with θ G ∈ Θ G . In Section A.2, we use a simulation study to show that both the attractive MMD loss L att D (Eq. S1b) and the proposed repulsive loss L rep D (Eq. 
S1c) may be locally stable and leave the proof for future work. In this section, we reused the example from to show that GAN trained using the MMD loss in Eq. S1 is locally stable. Consider a two-parameter MMD-GAN with uniform latent distribution P Z over [−1, 1], generator G(z) = w 1 z, discriminator D(x) = w 2 x 2, and Gaussian kernel k (a) the data distribution P X is the same as P Z, i.e., uniform over [−1, 1], thus P X is realizable; DISPLAYFORM0 Figure S1: Streamline plots of MMD-GAN using the MMD-rbf and the MMD-rep model on distributions: P Z = U(−1, 1), P X = U(−1, 1) or P X = N. In (a) and (b), the equilibria satisfying P G = P X lie on the line w 1 = 1. In (c), the equilibrium lies around point (1.55, 0.74); in (d), it is around (1.55, 0.32).(b) P X is standard Gaussian, thus non-realizable for any w 1 ∈ R. FIG1 shows that MMD-GAN are locally stable in both cases and D θ * D (x) = 0 does not hold in general for MMD loss. However, MMD-rep may not be globally stable for the tested cases: initialization of (w 1, w 2) in some regions may lead to the trivial solution w 2 = 0 (see FIG1 and S1d). We note that by decreasing the learning rate for G, the area of such regions decreased. At last, it is interesting to note that both MMD-rbf and MMD-rep had the same nontrivial solution w 1 ≈ 1.55 for generator in the non-realizable cases (see FIG1 and S1d). This section divides the proof for Proposition 1 into two parts. First, we show that GAN with the MMD loss in Eq. S1 has equilibria for any parameter configuration of discriminator D; second, we prove the model is locally exponentially stable. For convenience, we consider the general form of discriminator loss in Eq. 5: DISPLAYFORM0 which has L att D and L rep D as the special cases when λ equals −1 and 1 respectively. Consider real data X r ∼ P X, latent variable Z ∼ P Z and generated variable Y g = G θ G (Z). Let x r, z, y g be their samples. Denote ∇. DISPLAYFORM1 where L D and L G are the losses for D and G respectively. Assume an isotropic stationary kernel k(a, b) = k I (a − b) BID6 ) is used in MMD. We first show:Proposition 1 (Part 1). If there exists θ * G ∈ Θ G such that P θ * G = P X, the GAN with the MMD loss in Eq. S1a and Eq. S2 has equilibria (θ * DISPLAYFORM2 where k is the kernel of MMD. The gradients of MMD loss are DISPLAYFORM3 ∼ P G, an unbiased estimator of the squared MMD is BID8) We proceed to prove the model stability. First, following Theorem 5 in BID8 and Theorem 4 in Li et al. (2017a), it is straightforward to see: FORMULA13 ). Consider a non-linear system of parameters (θ, γ): θ = h 1 (θ, γ),γ = h 2 (θ, γ) with an equilibrium point at. Let there exist such that ∀γ ∈ DISPLAYFORM4 DISPLAYFORM5 DISPLAYFORM6 is a Hurwitz matrix, the non-linear system is exponentially stable. Proposition 1 (Part 2). At equilibrium P θ * G = P X, the GAN trained using MMD loss and gradient descent methods is locally exponentially stable at (θ DISPLAYFORM0 ∂b∂c . Based on Eq. S3, we have DISPLAYFORM1 where ⊗ is the kronecker product. At equilibrium, consider a sequence of N samples DISPLAYFORM2 DISPLAYFORM3 Given Lemma A.1 and fact that J GG is the Hessian matrix of M DISPLAYFORM4 is locally constant along some directions in the parameter space of G. As a , null(J GG) ⊆ null(J DG) because varying θ * G along these directions has no effect on D. Following Lemma C.3 of , we consider eigenvalue decomposition DISPLAYFORM5. Thus, the projections γ G = T G θ G are orthogonal to null(J GG). 
Then, the Jacobian corresponding to the projected system has the form DISPLAYFORM6, where J GG is negative definite. Moreover, on all directions exclude those described by J GG, the system is surrounded by a neighborhood of equilibia at least locally. According to Lemma A.2, the system is exponentially stable. This section shows that constant discriminator output DISPLAYFORM0 may have no discrimination power. First, we make the following assumptions:Assumption 3. 1. D is a multilayer perceptron where each layer l can be factorized into an affine transform and an element-wise activation function f l. 2. Each activation function f l ∈ C 0; furthermore, f l has a finite number of discontinuities and f l ∈ C 06. 3. Input data to D is continuous and its support S is compact in R d with non-zero measure in each dimension and d > 1 7.Based on Assumption 3, we have the following proposition:Proposition 2. If ∀x ∈ S, D(x) = c, where c is constant, then there always exists distortion δx such that x + δx ∈ S and D(x + δx) = c. are model weights and biases, f is an activation function satisfying Assumption 3. For x ∈ S, since D(x) = c, we have h(x) ∈ null(W 2). Furthermore: DISPLAYFORM0 has unique solution for any k ∈ R as long as k · h(x) is within the output range of f. DISPLAYFORM1 and n is the nullity of W 2. Let the projected support beŜ. Thus, DISPLAYFORM2 T with z c = 0.Consider the Jacobian: DISPLAYFORM3 where DISPLAYFORM4 is the input to activation, or pre-activations. SinceŜ is continuous and compact, it has infinite number of boundary points {x b} for d > 1. Consider one boundary pointx b and its normal line δx b. Let > 0 be a small scalar such thatx b − δx b ∈ S andx b + δx b ∈Ŝ.• For linear activation, ∇Σ = I and J is constant. Then z c remains 0 DISPLAYFORM5 there exists z such that h(x + δx) ∈ null(W 2).• For nonlinear activations, assume f has N discontinuities. Since U x T 0 T T + b 1 = c has unique solution for any vector c, the boundary points {x b} cannot yield pre-activations {a b} that all lie on the discontinuities in any of the d h directions. Though we might need to sample d N +1 h points in the worst case to find an exception, there are infinite number of exceptions. Letx b be a sample where {a b} does not lie on the discontinuities in any direction. Because f is continuous, z c remains 0 forx b + δx b, i.e., there exists z such that h(x + δx) ∈ null(W 2).In , we can always find δx such that x + δx / ∈ S and D(x + δx) = c. cannot discriminate against fake samples with distortions to the original data. In contrast, Assumption 2 and Lemma A.1 guarantee that, at equilibrium, the discriminator trained using MMD loss function is effective against such fake samples given a large number of i.i.d. test samples BID8 ). Several loss functions have been proposed to quantify the difference between real and generated sample scores, including: (assume linear activation is used at the last layer of D)• The Minimax loss BID7 ): Softplus(D(G(z) ))] and L G = −L D, which can be derived from the Jensen-Shannon (JS) divergence between P X and the model distribution P G. 
DISPLAYFORM0 • The non-saturating loss BID7 ), which is a variant of the minimax loss with the same L D and DISPLAYFORM1 • The Hinge loss : DISPLAYFORM2, which is notably known for usage in support vector machines and is related to the total variation (TV) distance .• The Wasserstein loss; BID10 ), which is derived from the Wasserstein distance between P X and P G: DISPLAYFORM3, where D is subject to some Lipschitz constraint.• The maximum mean discrepancy (MMD) (Li et al. (2017a); ), as described in Section 2. For unsupervised image generation tasks on CIFAR-10 and STL-10 datasets, the DCGAN architecture from was used. For CelebA and LSUN bedroom datasets, we added more layers to the generator and discriminator accordingly. See TAB0 and S2 for details. TAB0: DCGAN models for image generation on CIFAR-10 (h = w = 4, H = W = 32) and STL-10 (h = w = 6, H = W = 48) datasets. For non-saturating loss and hinge loss, s = 1; for MMD-rand, MMD-rbf, MMD-rq, s = 16. For a weight matrix W, the spectral norm is defined as σ(W) = max v 2 ≤1 W v 2. The PIM is used to estimate σ(W) , which iterates between two steps: DISPLAYFORM0 The convolutional kernel W c is a tensor of shape h × w × c in × c out with h, w the receptive field size and c in, c out the number of input/output channels. To estimate σ(W c), reshaped it into a matrix W rs of shape (hwc in) × c out and estimated σ(W rs).We propose a simple method to calculate W c directly based on the fact that convolution operation is linear. For any linear map T: R m → R n, there exists matrix W L ∈ R n×m such that y = T (x) can be represented as y = W L x. Thus, we may simply substitute W L = ∂y ∂x in the PIM method to estimate the spectral norm of any linear operation. In the case of convolution operation *, there exist doubly block circulant matrix DISPLAYFORM1 T u which is essentially the transpose convolution of W c on u BID4 ). Thus, similar to PIM, PICO iterates between the following two steps: DISPLAYFORM2 2. Do transpose convolution of W c on u to getv; update v =v/ v 2.Similar approaches have been proposed in and from different angles, which we were not aware during this study. In addition, proposes to compute the exact singular values of convolution kernels using FFT and SVD. In spectral normalization, only the first singular value is concerned, making the power iteration methods PIM and PICO more efficient than FFT and thus preferred in our study. However, we believe the exact method FFT+SVD may eventually inspire more rigorous regularization methods for GAN.The proposed PICO method estimates the real spectral norm of a convolution kernel at each layer, thus enforces an upper bound on the Lipschitz constant of the discriminator D. Denote the upper bound as LIP PICO. In this study, Leaky ReLU (LReLU) was used at each layer of D, thus LIP PICO ≈ 1 . In practice, however, PICO would often cause the norm of the signal passing through D to decrease to zero, because at each layer,• the signal hardly coincides with the first singular-vector of the convolution kernel; and• the activation function LReLU often reduces the norm of the signal. Consequently, the discriminator outputs tend to be similar for all the inputs. To compensate the loss of norm at each layer, the signal is multiplied by a constant C after each spectral normalization. This essentially enlarges LIP PICO by C K where K is the number of layers in the DCGAN discriminator. 
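A hedged PyTorch sketch of the generalized power iteration (PICO) described above, for a stride-1 convolution: one step convolves the current vector v with the kernel and normalises, the next applies the transpose convolution (the adjoint of the convolution) and normalises. The function name, the stride-1/padding assumption and the caching of v across updates are ours.

```python
import torch
import torch.nn.functional as F

def pico_spectral_norm(weight, x_shape, n_iter=1, padding=1, v=None):
    # weight: conv kernel of shape (c_out, c_in, k, k); x_shape: shape of one
    # input activation map, e.g. (1, c_in, h, w). Returns an estimate of the
    # spectral norm of the linear map x -> conv2d(x, weight) and the cached v.
    if v is None:
        v = torch.randn(x_shape)
    with torch.no_grad():
        for _ in range(n_iter):
            u = F.conv2d(v, weight, padding=padding)            # step 1: W_c * v
            u = u / (u.norm() + 1e-12)
            v = F.conv_transpose2d(u, weight, padding=padding)  # step 2: W_c^T * u
            v = v / (v.norm() + 1e-12)
        sigma = F.conv2d(v, weight, padding=padding).norm()
    return sigma, v

# Spectral normalization of the layer then divides weight by sigma; the text
# multiplies the signal back by a per-layer constant C afterwards (C^K overall
# for a K-layer discriminator), i.e. roughly weight_sn = C * weight / sigma.
```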
For all experiments in Section 5, we fixed C = 1 0.55 ≈ 1.82 as all loss functions performed relatively well empirically. In Appendix Section C.3, we tested the effects of coefficient C K on the performance of several loss functions. PIM also enforces an upper bound LIP PIM on the Lipschitz constant of the discriminator D. Consider a convolution kernel W c with receptive field size h × w and stride s. Let σ PICO and σ PIM be the spectral norm estimated by PICO and PIM respectively. We empirically In this section, we empirically evaluate the effects of coefficient C K on the performance of PICO and compare PICO against PIM using several loss functions. We used a similar setup as Section 5.1 with the following adjustments. Four loss functions were tested: hinge, MMD-rbf, MMD-rep and MMD-rep-b. Either PICO or PIM was used at each layer of the discriminator. For PICO, five coefficients C K were tested: 16, 32, 64, 128 and 256 (note this is the overall coefficient for K layers; K = 8 for CIFAR-10 and STL-10; K = 10 for CelebA and LSUN-bedroom; see Appendix B.2). FID was used to evaluate the performance of each combination of loss function and power iteration method, e.g., hinge + PICO with C K = 16.Results: For each combination of loss function and power iteration method, the distribution of FID scores over 16 learning rate combinations is shown in FIG0. We separated well-performed learning rate combinations from diverged or poorly-performed ones using a threshold τ as the diverged cases often had non-meaningful FID scores. The boxplot shows the distribution of FID scores for goodperformed cases while the number of diverged or poorly-performed cases was shown above each box if it is non-zero. 1) When PICO was used, the hinge, MMD-rbf and MMD-rep methods were sensitive to the choices of C K while MMD-rep-b was robust. For hinge and MMD-rbf, higher C K may in better FID scores and less diverged cases over 16 learning rate combinations. For MMD-rep, higher C K may cause more diverged cases; however, the best FID scores were often achieved with C K = 64 or 128.2) For CIFAR-10, STL-10 and CelebA datasets, PIM performed comparable to PICO with C K = 128 or 256 on four loss functions. For LSUN bedroom dataset, it is likely that the performance of PIM corresponded to that of PICO with C K > 256. This implies that PIM may in a relatively loose Lipschitz constraint on deep convolutional networks.3) MMD-rep-b performed generally better than hinge and MMD-rbf with tested power iteration methods and hyper-parameter configurations. Using PICO, MMD-rep also achieved generally better FID scores than hinge and MMD-rbf. This implies that, given a limited computational budget, the proposed repulsive loss may be a better choice than the hinge and MMD loss for the discriminator. TAB2 shows the best FID scores obtained by PICO and PIM where C K was fixed at 128 for hinge and MMD-rbf, and 64 for MMD-rep and MMD-rep-b. For hinge and MMD-rbf, PICO performed significantly better than PIM on the LSUN-bedroom dataset and comparably on the rest datasets. For MMD-rep and MMD-rep-b, PICO achieved consistently better FID scores than PIM.However, compared to PIM, PICO has a higher computational cost which roughly equals the additional cost incurred by increasing the batch size by two . This may be problematic when a small batch has to be used due to memory constraints, e.g., when handling high resolution images on a single GPU. Thus, we recommend using PICO when the computational cost is less of a concern. 
D SUPPLEMENTARY EXPERIMENTS D.1 LIPSCHITZ CONSTRAINT VIA GRADIENT PENALTY Gradient penalty has been widely used to impose the Lipschitz constraint on the discriminator arguably since Wasserstein GAN BID10 ). This section explores whether the proposed repulsive loss can be applied with gradient penalty. Several gradient penalty methods have been proposed for MMD-GAN. penalized the gradient norm of witness function y) ] w.r.t. the interpolated sample z = ux + (1 − u)y to one, where u ∼ U 9. More recently, proposed to impose the Lipschitz constraint on the mapping φ • D directly and derived the Scaled MMD (SMMD) as SM k (P, Q) = σ µ,k,λ M k (P, Q), where the scale σ µ,k,λ incorporates gradient and smooth penalties. Using the Gaussian kernel and measure µ = P X leads to the discriminator loss: DISPLAYFORM0 DISPLAYFORM1 We apply the same formation of gradient penalty to the repulsive loss: DISPLAYFORM2 where the numerator L rep D − 1 ≤ 0 so that the discriminator will always attempt to minimize both L rep D and the Frobenius norm of gradients ∇D(x) w.r.t. real samples. Meanwhile, the generator is trained using the MMD loss L mmd G (Eq. 2).Experiment setup: The gradient-penalized repulsive loss L rep-gp D (Eq. S8, referred to as MMD-repgp) was evaluated on the CIFAR-10 dataset. We found λ = 10 in too restrictive 9 Empirically, we found this gradient penalty did not work with the repulsive loss. The reason may be the attractive loss L att D (Eq. 3) is symmetric in the sense that swapping P X and PG in the same loss; while the repulsive loss is asymmetric and naturally in varying gradient norms in data space. and used λ = 0.1 instead. Same as, the output dimension of discriminator was set to one. Since we entrusted the Lipschitz constraint to the gradient penalty, spectral normalization was not used. The rest experiment setup can be found in Section 5.1. TAB3 shows that the proposed repulsive loss can be used with gradient penalty to achieve reasonable on CIFAR-10 dataset. For comparison, we cited the Inception score and FID for Scaled MMD-GAN (SMMDGAN) and Scaled MMD-GAN with spectral normalization (SN-SMMDGAN) from. Note that SMMDGAN and SN-SMMDGAN used the same DCGAN architecture as MMD-rep-gp, but were trained for 150k generator updates and 750k discriminator updates, much more than that of MMD-rep-gp (100k for both G and D). Thus, the repulsive loss significantly improved over the attractive MMD loss for discriminator. In this section, we investigate the impact of the output dimension of discriminator on the performance of repulsive loss. Experiment setup: We used a similar setup as Section 5.1 with the following adjustments. The repulsive loss was tested on the CIFAR-10 dataset with a variety of discriminator output dimensions: d ∈ {1, 4, 16, 64, 256}. Spectral normalization was applied to discriminator with the proposed PICO method (see Appendix C) and the coefficients C K selected from {16, 32, 64, 128, 256}.Results: TAB4 shows that using more than one output neuron in the discriminator D significantly improved the performance of repulsive loss over the one-neuron case on CIFAR-10 dataset. The reason may be that using insufficient output neurons makes it harder for the discriminator to learn an injective and discriminative representation of the data (see FIG8). However, the performance gain diminished when more neurons were used, perhaps because it becomes easier for D to surpass the generator G and trap it around saddle solutions. 
The computation cost also slightly increased due to more output neurons. Generated samples on CelebA dataset are given in FIG7 and LSUN bedrooms in FIG8. Spectral normalization was applied to discriminator with two power iteration methods: PICO and PIM. For PICO, five coefficients C K were tested: 16, 32, 64, 128, and 256. A learning rate combination was considered diverged or poorly-performed if the FID score exceeded a threshold τ, which is 50, 80, 50, 90 for CIFAR-10, STL-10, CelebA and LSUN-bedroom respectively. The box quartiles were plotted based on the cases with FID < τ while the number of diverged or poorly-performed cases (out of 16 learning rate combinations) was shown above each box if it is non-zero. We introduced τ because the diverged cases often had arbitrarily large and non-meaningful FID scores. DISPLAYFORM0 | Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets | 632 | scitldr |
Deep neural networks have shown incredible performance for inference tasks in a variety of domains. Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits scaling of deep learning as a service (DLaaS) and use for on-device augmented intelligence. This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks (with synaptic weights drawn from discrete sets) to perform inference without full decompression. The basic insight that allows a lower rate than naive approaches is the recognition that the bipartite graph layers of feedforward networks have a kind of permutation invariance to the labeling of nodes in terms of inferential operation, and that the inference operation at a node depends only locally on the edges directly connected to it. We also provide experimental results of our approach on the MNIST dataset. Deep learning has achieved incredible performance for inference tasks such as speech recognition, image recognition, and natural language processing. Most current deep neural networks, however, are enormous cloud-based structures that are too large and too complex to perform fast, energy-efficient inference on device or for scaling deep learning as a service (DLaaS). Compression, with the capability of providing inference without full decompression, is therefore important. Universal source coding for feedforward deep networks having synaptic weights drawn from finite sets, essentially achieving the entropy lower bound, was introduced in BID0. Here, we provide, for the first time, an algorithm that directly uses these compressed representations for inference tasks without complete decompression. Structures that can represent information near the entropy bound while also allowing efficient operations on them are called succinct structures (2; 3; 4). Thus, we provide a succinct structure for feedforward neural networks, which may fit on-device and enable scaling of DLaaS. Related Work: There has been recent interest in compact representations of neural networks (5; 6; 7; 8; 9; 10; 11; 12; 13; 14). While most of these algorithms are lossy, we provide an efficient lossless algorithm, which can be used on top of any lossy algorithm that quantizes or prunes network weights; prior work on lossless compression of neural networks either used Huffman coding in a way that did not exploit invariances, or was not succinct and required full decompression for inference. The proposed algorithm builds on this sublinear entropy-achieving representation, but is the first time succinctness, the further ability to perform inference with negligible space needed for partial decompression, has been attempted or achieved. Our inference algorithm is similar to arithmetic decoding, and so its computational performance is also governed by efficient implementations of arithmetic coding. Efficient high-throughput implementations of arithmetic coding/decoding have been developed for video, e.g. as part of the H.264/AVC and HEVC standards (15; 16). Let us describe the neural network model under consideration, which will be used here to develop succinct structures of deep neural networks. In a feedforward neural network, each node j computes an activation function g(·) applied to the weighted sum of its inputs, which we can note is a permutation-invariant function: g(Σ_i w_ij x_i) = g(Σ_i w_π(i)j x_π(i)), for any permutation π.
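As a tiny sanity check of that invariance, the NumPy snippet below verifies that a single node's output is unchanged when its inputs and incoming weights are permuted together; the names and the choice of tanh are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)          # inputs to node j
w = rng.normal(size=5)          # synaptic weights into node j
g = np.tanh                     # any activation function

perm = rng.permutation(5)
assert np.isclose(g(w @ x), g(w[perm] @ x[perm]))
```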
Consider a feedforward neural network with K − 1 hidden layers where each node contains N nodes (for notational convenience) such that the nodes in all the K − 1 hidden layers are indistinguishable from each other (when edges are ignored) but the nodes in the input and output layers are labeled and can be distinguished. There is an edge of color i, i = 0,..., m, between any two nodes from two different layers independently with probability p i, where p 0 is the probability of no edge. Consider a substructure: partially-labeled bipartite graphs, see FIG1, which consists of two sets of vertices containing N vertices each with one of the sets containing labeled vertices and the other set containing unlabeled vertices. An edge of color i exists between any two nodes taken one from each set with probability p i, i = 0,..., m where p 0 is the probability of no edge. Refer to for detailed discussion on the structure. To construct the K-layer neural network, think of it as made of a partially-labeled bipartite graph for the first two layers but then each time the nodes of an unlabeled layer are connected, we treat it as a labeled layer, based on its connection to the previous labeled layer (i.e. we can label the unlabeled nodes based on the nodes of the previous layer it is connected to), and iteratively complete the K-layer neural network. First, we consider the succinct representation of a partially labeled bipartite graph, followed by that of a K-layered neural network. Alg. 1 is an inference algorithm for a partially-labeled bipartite graph with input to the graph X and output Y. Later we use this algorithm to make inferences in a K-layered neural network where outputs of unlabeled layers correspond to outputs of a hidden layer. The optimally compressed representation of a partially-labeled bipartite graph produced by (1, Alg.1) is taken as an input by Alg. 1, in addition to the input X to the graph, and the output Y of the graph is given out. If the graph has N nodes in each layer, then only an additional O(N) bits of dynamic space is required by Alg. 1 for the inference task while it takes O(N 2) bits to store the representation and hence the structure in succinct as discussed below. Lemma 1. Output Y obtained from Alg. 1 is a permutation ofỸ, the output from the uncompressed neural network representation. Proof. Say, we have an m × 1 vector X to be multiplied with an m × n weight matrix W, to get the outputỸ, an n × 1 vector. Then,Ỹ = W T X, and so the jth element ofỸ, DISPLAYFORM0 In Alg. 1, while traversing a particular depth i, we multiply all Y j s with X i W i,j and hence when we reach depth N, we get the Y vector as required. The change in permutation ofỸ with respect to Y is because while compressing W, we do not encode the permutation of the columns, retaining the row permutation. Proof. The major dynamic space requirement is for decoding of individual nodes, and the queue, Q. Clearly, the space required for Q, is much more than the space required for decoding a single node. We show the expected space complexity corresponding to Q is less than or equal to 2(m + 1)N (1 + 2 log 2 ( m+2 m+1)) using Elias-Gamma integer codes for each entry in Q. Note that Q has nodes from at most two consecutive depths, and since only the child nodes of non-zero nodes are encoded, and the number of non-zero nodes at any depth is less than N, we can have a maximum of 2(m + 1)N nodes encoded in Q. Let α 0,..., α k be the non-zero tree nodes at some depth d of the tree, where k = (m+1)N. 
Let S be the total space required to store Q. Using integer codes, we can encode any positive number x in 2 log 2 (x) + 1 bits, and to allow 0, we need 2 log 2 (x + 1) + 1 bits. Thus, the Set i = 0. Set f = the first element obtained after dequeuing Q. while i ≤ m and f > 0 do 7: decode the child node of f corresponding to color i and store it as c. Encode c back in L 1. Enqueue c in Q. Add x l × w i to each of y j to y (j+c). Add c to j. if j = 1, at least one non-zero node has been processed at the current depth then 11: DISPLAYFORM0 end if 13: end while 14: end while 15: Update the Y vector using the required activation function.. Thus, the structure is succinct. Now consider the structure of the K-layered neural network as in Sec. 2 and provide its succinct representation. The extra dynamic space for K-layers remains the same as for 2-layers as described in Alg. 1 as inference is done one layer at a time. Theorem 3. The compressed structure obtained by the iterative use of (1, Alg. 1) is succinct. We trained a feedforward neural network of dimension 784 × 50 × 50 × 50 × 50 × 10 on the MNIST dataset using gradient descent algorithm to get 98.4% accuracy on the test data. Network weights were quantized using a uniform quantizer into 33 steps to get a network with an accuracy of 97.5% on the training data and an accuracy of 93.48% on the test data. The weight matrices from the second to the last layer were rearranged based on the weight matrices corresponding to the previous layers as needed for Alg. 1 to work. These matrices, except the last matrix connected to the output, were compressed using (1, Alg. 1) to get the compressed network, and arithmetic coding was implemented by modification of an existing implementation. The compressed network performs exactly as the quantized network as it should, since we compress losslessly. We observe that the extra memory required for inference is negligible compared to the size of the compressed network. Detailed from the experiment and dynamic space requirements are described in TAB0, where H(p) is the empirical entropy calculated from the weight matrices. | This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression. | 633 | scitldr |
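A short sketch of the Elias-Gamma integer code used in the space bound above: a positive integer x is written as floor(log2 x) zeros followed by its binary representation, for 2*floor(log2 x) + 1 bits, and shifting by one allows zero-valued counts. The function names and the example queue entries are ours.

```python
def elias_gamma(x: int) -> str:
    # Encode a positive integer x in 2*floor(log2 x) + 1 bits.
    assert x >= 1
    b = bin(x)[2:]
    return "0" * (len(b) - 1) + b

def elias_gamma_nonneg(x: int) -> str:
    # Shift by one to allow x = 0, costing 2*floor(log2(x + 1)) + 1 bits.
    return elias_gamma(x + 1)

# e.g. per-colour child counts for nodes held in the queue Q, concatenated:
entries = [0, 3, 1, 0, 2]
stream = "".join(elias_gamma_nonneg(c) for c in entries)
```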
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods designed for this adversarial training. In this work, we cast GAN optimization problems in the general variational inequality framework. Tapping into the mathematical programming literature, we counter some common misconceptions about the difficulties of saddle point optimization and propose to extend methods designed for variational inequalities to the training of GANs. We apply averaging, extrapolation and a computationally cheaper variant that we call extrapolation from the past to the stochastic gradient method (SGD) and Adam. Generative adversarial networks (GANs) BID12 ) form a generative modeling approach known for producing realistic natural images as well as high quality super-resolution and style transfer . Nevertheless, GANs are also known to be difficult to train, often displaying an unstable behavior BID11. Much recent work has tried to tackle these training difficulties, usually by proposing new formulations of the GAN objective (; . Each of these formulations can be understood as a two-player game, in the sense of game theory , and can be addressed as a variational inequality problem (VIP) BID15, a framework that encompasses traditional saddle point optimization algorithms .Solving such GAN games is traditionally approached by running variants of stochastic gradient descent (SGD) initially developed for optimizing supervised neural network objectives. Yet it is known that for some games (, §8. 2) SGD exhibits oscillatory behavior and fails to converge. This oscillatory behavior, which does not arise from stochasticity, highlights a fundamental problem: while a direct application of basic gradient descent is an appropriate method for regular minimization problems, it is not a sound optimization algorithm for the kind of two-player games of GANs. This constitutes a fundamental issue for GAN training, and calls for the use of more principled methods with more reassuring convergence guarantees. Contributions. We point out that multi-player games can be cast as variational inequality problems (VIPs) and consequently the same applies to any GAN formulation posed as a minimax or non-zerosum game. We present two techniques from this literature, namely averaging and extrapolation, widely used to solve VIPs but which have not been explored in the context of GANs before. 1 We extend standard GAN training methods such as SGD or Adam into variants that incorporate these techniques (Alg. 4 is new). We also explain that the oscillations of basic SGD for GAN training previously noticed BID11 can be explained by standard variational inequality optimization and we illustrate how averaging and extrapolation can fix this issue. We introduce a technique, called extrapolation from the past, that only requires one gradient computation per update compared to extrapolation which requires to compute the gradient twice, rediscovering, with a VIP perspective, a particular case of optimistic mirror descent . We prove its convergence for strongly monotone operators and in the stochastic VIP setting. Finally, we test these techniques in the context of GAN training. 
We observe a 4-6% improvement over on the inception score and the Fréchet inception distance on the CIFAR-10 dataset using a WGAN-GP BID14 ) and a ResNet generator. Outline. §2 presents the on GAN and optimization, and shows how to cast this optimization as a VIP. §3 presents standard techniques and extrapolation from the past to optimize variational inequalities in a batch setting. §4 considers these methods in the stochastic setting, yielding three corresponding variants of SGD, and provides their respective convergence rates. §5 develops how to combine these techniques with already existing algorithms. §6 discusses the related work and §7 presents experimental . The purpose of generative modeling is to generate samples from a distribution q θ that matches best the true distribution p of the data. The generative adversarial network training strategy can be understood as a game between two players called generator and discriminator. The former produces a sample that the latter has to classify between real or fake data. The final goal is to build a generator able to produce sufficiently realistic samples to fool the discriminator. In the original GAN paper BID12, the GAN objective is formulated as a zero-sum game where the cost function of the discriminator D ϕ is given by the negative log-likelihood of the binary classification task between real or fake data generated from q θ by the generator, However BID12 recommends to use in practice a second formulation, called non-saturating GAN. This formulation is a non-zero-sum game where the aim is to jointly minimize: DISPLAYFORM0 The dynamics of this formulation has the same stationary points as the zero-sum one but is claimed to provide "much stronger gradients early in learning" BID12. The minimax formulation is theoretically convenient because a large literature on games studies this problem and provides guarantees on the existence of equilibria. Nevertheless, practical considerations lead the GAN literature to consider a different objective for each player as formulated in. In that case, the two-player game problem consists in finding the following Nash equilibrium: DISPLAYFORM0 Only when L G = −L D is the game called a zero-sum game and can be formulated as a minimax problem. One important point to notice is that the two optimization problems in are coupled and have to be considered jointly from an optimization point of view. Standard GAN objectives are non-convex (i.e. each cost function is non-convex), and thus such (pure) equilibria may not exist. As far as we know, not much is known about the existence of these equilibria for non-convex losses (see BID17 and references therein for some ). In our theoretical analysis in §4, our assumptions (monotonicity of the operator and convexity of the constraint set) imply the existence of an equilibrium. In this paper, we focus on ways to optimize these games, assuming that an equilibrium exists. As is often standard in non-convex optimization, we also focus on finding points satisfying the necessary stationary conditions. As we mentioned previously, one difficulty that emerges in the optimization of such games is that the two different cost functions of have to be minimized jointly in θ and ϕ. Fortunately, the optimization literature has for a long time studied so-called variational inequality problems, which generalize the stationary conditions for two-player game problems. 
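To connect the two coupled objectives above to the variational inequality view developed next, here is a small PyTorch sketch of how the joint vector field that stacks each player's gradient of its own cost can be assembled with autograd; the toy costs are placeholders for the GAN losses, and all names are ours.

```python
import torch

theta = torch.randn(3, requires_grad=True)   # generator parameters
phi = torch.randn(3, requires_grad=True)     # discriminator parameters

def loss_G(theta, phi):
    return (theta * phi).sum()                # placeholder for L_G(theta, phi)

def loss_D(theta, phi):
    return -(theta * phi).sum()               # placeholder for L_D(theta, phi)

def game_operator(theta, phi):
    # Stacks each player's gradient of its own cost; in the unconstrained case,
    # a zero of this field is a stationary point of the two-player game.
    g_theta = torch.autograd.grad(loss_G(theta, phi), theta)[0]
    g_phi = torch.autograd.grad(loss_D(theta, phi), phi)[0]
    return torch.cat([g_theta, g_phi])

F_omega = game_operator(theta, phi)
```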
We first consider the local necessary conditions that characterize the solution of the smooth two-player game, defining stationary points, which will motivate the definition of a variational inequality. In the unconstrained setting, a stationary point is a couple (θ *, ϕ *) with zero gradient: DISPLAYFORM0 When constraints are present, 3 a stationary point (θ *, ϕ *) is such that the directional derivative of each cost function is non-negative in any feasible direction (i.e. there is no feasible descent direction): DISPLAYFORM1 Defining ω def = (θ, ϕ), ω * def = (θ *, ϕ *), Ω def = Θ × Φ, Eq. can be compactly formulated as: DISPLAYFORM2 These stationary conditions can be generalized to any continuous vector field: let Ω ⊂ R d and F: Ω → R d be a continuous mapping. The variational inequality problem BID15 ) (depending on F and Ω) is: DISPLAYFORM3 We call optimal set the set Ω * of ω ∈ Ω verifying (VIP). The intuition behind it is that any ω * ∈ Ω * is a fixed point of the constrained dynamic of F (constrained to Ω).We have thus showed that both saddle point optimization and non-zero sum game optimization, which encompass the large majority of GAN variants proposed in the literature, can be cast as VIPs. In the next section, we turn to suitable optimization techniques for such problems. Let us begin by looking at techniques that were developed in the optimization literature to solve VIPs. We present the intuitions behind them as well as their performance on a simple bilinear problem (see FIG1). Our goal is to provide mathematical insights on averaging (§3.1) and extrapolation (§3.2) and propose a novel variant of the extrapolation technique that we called extrapolation from the past (§3.3). We consider the batch setting, i.e., the operator F (ω) defined in Eq. 6 yields an exact full gradient. We present extensions of these techniques to the stochastic setting later in §4.The two standard methods studied in the VIP literature are the gradient method BID3 and the extragradient method . The iterates of the basic gradient method are given by DISPLAYFORM0 is the projection onto the constraint set (if constraints are present) associated to (VIP). These iterates are known to converge linearly under an additional assumption on the operator 4 BID4, but oscillate for a bilinear operator as shown in FIG1. On the other hand, the uniform average of these iterates converge for any bounded monotone operator with a O(1/ √ t) rate (Nedić and), motivating the presentation of averaging in §3.1. By contrast, the extragradient method (extrapolated gradient) does not require any averaging to converge for monotone operators (in the batch setting), and can even converge at the faster O(1/t) rate . The idea of this method is to compute a lookahead step (see intuition on extrapolation in §3.2) in order to compute a more stable direction to follow. More generally, we consider a weighted averaging scheme with weights ρ t ≥ 0. This weighted averaging scheme have been proposed for the first time for (batch) VIP by BID3, DISPLAYFORM0 Averaging schemes can be efficiently implemented in an online fashion noticing that, DISPLAYFORM1 For instance, settingρ T = 1 T yields uniform averaging (ρ t = 1) andρ t = 1 − β < 1 yields geometric averaging, also known as exponential moving averaging (ρ t = β T −t, 1 ≤ t ≤ T). 
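A small sketch of the online averaging rule just described: with weight 1/T it maintains the uniform average of the iterates, and with weight 1 - β an exponential moving average. The toy alternating dynamics below is only there to produce bounded iterates so the average has something to converge to (cf. Prop. 1 below); all names and constants are ours.

```python
import numpy as np

def update_average(avg, omega, rho_bar):
    # Online weighted averaging: avg <- (1 - rho_bar) * avg + rho_bar * omega.
    # rho_bar = 1/T gives the uniform average, rho_bar = 1 - beta gives an EMA.
    return (1.0 - rho_bar) * avg + rho_bar * omega

# Toy bilinear game min_theta max_phi theta*phi with alternating gradient steps,
# whose iterates stay bounded, so that uniform averaging converges.
eta, beta = 0.1, 0.99
theta, phi = 1.0, 1.0
avg = np.array([theta, phi])
ema = np.array([theta, phi])
for t in range(1, 2000):
    theta = theta - eta * phi                     # player 1 step
    phi = phi + eta * theta                       # player 2 step (alternating)
    avg = update_average(avg, np.array([theta, phi]), 1.0 / (t + 1))
    ema = update_average(ema, np.array([theta, phi]), 1.0 - beta)

print(avg)   # close to the equilibrium (0, 0); the raw iterates keep rotating
```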
Averaging is experimentally compared with the other techniques presented in this section in FIG1.In order to illustrate how averaging tackles the oscillatory behavior in game optimization, we consider a toy example where the discriminator and the generator are linear: D ϕ (x) = ϕ T x and G θ (z) = θz (implicitly defining q θ). By substituting these expressions in the WGAN objective, 5 we get the following bilinear objective: min DISPLAYFORM2 A similar task was presented by where they consider a quadratic discriminator instead of a linear one, and show that gradient descent is not necessarily asymptotically stable. The bilinear objective has been extensively used BID11; ) to highlight the difficulties of gradient descent for saddle point optimization. Yet, ways to cope with this issue have been proposed decades ago in the context of mathematical programming. For illustrating the properties of the methods of interest, we will study their behavior in the rest of §3 on a simple unconstrained unidimensional version of Eq. 9 (this behavior can be generalized to general multidimensional bilinear examples, see §B.3): min DISPLAYFORM3 The operator associated with this minimax game is F (θ, φ) = (φ, −θ). There are several ways to compute the discrete updates of this dynamics. The two most common ones are the simultaneous and the alternating gradient update rules, Simultaneous update: DISPLAYFORM4 Interestingly, these two choices give rise to completely different behaviors. The norm of the simultaneous updates diverges geometrically, whereas the alternating iterates are bounded but do not converge to the equilibrium. As a consequence, their respective uniform average have a different behavior, as highlighted in the following proposition (proof in §B.1 and generalization in §B.3): Proposition 1. The simultaneous iterates diverge geometrically and the alternating iterates defined in are bounded but do not converge to 0 as DISPLAYFORM5 The uniform average (θ t,φ t) def = 1 t t−1 s=0 (θ s, φ s) of the simultaneous updates (resp. the alternating updates) diverges (resp. converges to 0) as, DISPLAYFORM6 This sublinear convergence , proved in §B, underlines the benefits of averaging when the sequence of iterates is bounded (i.e. for alternating update rule). When the sequence of iterates is not bounded (i.e. for simultaneous updates) averaging fails to ensure convergence. This theorem also shows how alternating updates may have better convergence properties than simultaneous updates. Another technique used in the variational inequality literature to prevent oscillations is extrapolation. This concept is anterior to the extragradient method since mentions that the idea of extrapolated "prices" to give "stability" had been already formulated by Polyak (1963, Chap. II). The idea behind this technique is to compute the gradient at an (extrapolated) point different from the current point from which the update is performed, stabilizing the dynamics: DISPLAYFORM0 Perform update step: DISPLAYFORM1 Note that, even in the unconstrained case, this method is intrinsically different from Nesterov's momentum 6 (, Eq. 2.2.9) because of this lookahead step for the gradient computation: DISPLAYFORM2 Nesterov's method does not converge when trying to optimize. One intuition of why extrapolation has better convergence properties than the standard gradient method comes from Euler's integration framework. 
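As a concrete sketch of the two update rules just described on the toy problem min_theta max_phi theta*phi, the snippet below runs the plain (simultaneous) gradient method next to extrapolation; the step size and iteration count are ours. The Euler-integration intuition is picked up right after.

```python
import numpy as np

def F(omega):
    # Vector field of the toy game min_theta max_phi theta*phi: F = (phi, -theta)
    theta, phi = omega
    return np.array([phi, -theta])

eta = 0.2
omega_gd = np.array([1.0, 1.0])     # simultaneous gradient method
omega_eg = np.array([1.0, 1.0])     # extrapolation (extragradient)

for _ in range(200):
    # gradient method: omega <- omega - eta * F(omega)   (spirals outwards here)
    omega_gd = omega_gd - eta * F(omega_gd)
    # extrapolation: look ahead, then update from the original point
    omega_half = omega_eg - eta * F(omega_eg)            # extrapolation step
    omega_eg = omega_eg - eta * F(omega_half)            # update step
    # (with constraints, each step would be followed by a projection onto Omega)

print(np.linalg.norm(omega_gd), np.linalg.norm(omega_eg))  # diverges vs. -> 0
```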
Indeed, to first order, we have ω t+1/2 ≈ ω t+1 + o(η) and consequently, the update step can be interpreted as a first order approximation to an implicit method step: DISPLAYFORM3 Implicit methods are known to be more stable and to benefit from better convergence properties BID1 than explicit methods, e.g., in §B.2 we show that on converges for any η. Though, they are usually not practical since they require to solve a potentially non-linear system at each step. Going back to the simplified WGAN toy example from §3.1, we get the following update rules:Implicit: DISPLAYFORM4 In the following proposition, we see that for η < 1, the respective convergence rates of the implicit method and extrapolation are highly similar. Keeping in mind that the latter has the major advantage of being more practical, this proposition clearly underlines the benefits of extrapolation. Note that Prop. 1 and 2 generalize to general unconstrained bilinear game (more details and proof in §B.3), Proposition 2. The squared norm of the iterates DISPLAYFORM5 t, where the update rule of θ t and φ t are defined in, decreases geometrically for any η < 1 as, DISPLAYFORM6 One issue with extrapolation is that the algorithm "wastes" a gradient. Indeed we need to compute the gradient at two different positions for every single update of the parameters. We thus propose a technique that we call extrapolation from the past that only requires a single gradient computation per update. The idea is to store and re-use the extrapolated gradient for the extrapolation:Extrapolation from the past: DISPLAYFORM0 Perform update step: DISPLAYFORM1 The same update scheme was proposed by Chiang et al. (2012, Alg. 1) in the context of online convex optimization and generalized by for general online learning. Without projection, FORMULA1 and FORMULA0 reduce to the optimistic mirror descent described by BID7: DISPLAYFORM2 We rediscovered this technique from a different perspective: it was motivated by VIP and inspired from the extragradient method. Using the VIP point of view, we are able to prove a linear convergence rate for extrapolation from the past (see details and proof of Theorem 1 in §B.4). We also provide for a stochastic version in §4. In comparison to the from BID7 that Adam) with the techniques presented in §3 on the optimization of. Only the algorithms advocated in this paper (Averaging, Extrapolation and Extrapolation from the past) converge quickly to the solution. Each marker represents 20 iterations. We compare these algorithms on a non-convex objective in §G.1. DISPLAYFORM3 Figure 2: Three variants of SGD computing T updates, using the techniques introduced in §3.hold only for a bilinear objective, we provide a faster convergence rate (linear vs sublinear) on the last iterate for a general (strongly monotone) operator F and any projection on a convex Ω. One thing to notice is that the operator of a bilinear objective is not strongly monotone, but in that case one can use the standard extrapolation method which converges linearly for a (constrained or not) bilinear game (, Cor. 3.3). Theorem 1 (Linear convergence of extrapolation from the past). 
If F is µ-strongly monotone (see §A for the definition of strong monotonicity) and L-Lipschitz, then the updates and with η = 1 4L provide linearly converging iterates, DISPLAYFORM4 In this section, we consider extensions of the techniques presented in §3 to the context of a stochastic operator, i.e., we no longer have access to the exact gradient F (ω) but to an unbiased stochastic estimate of it, F (ω, ξ), where ξ ∼ P and DISPLAYFORM0 It is motivated by GAN training where we only have access to a finite sample estimate of the expected gradient, computed on a mini-batch. For GANs, ξ is a mini-batch of points coming from the true data distribution p and the generator distribution q θ.For our analysis, we require at least one of the two following assumptions on the stochastic operator: DISPLAYFORM1 Assumption 2. Bounded expected squared norm by DISPLAYFORM2 Assump. 1 is standard in stochastic variational analysis, while Assump. 2 is a stronger assumption sometimes made in stochastic convex optimization. To illustrate how strong Assump. 2 is, note that it does not hold for an unconstrained bilinear objective like in our example FORMULA0 We now present and analyze three algorithms that are variants of SGD that are appropriate to solve (VIP). The first one Alg. 1 (AvgSGD) is the stochastic extension of the gradient method for solving (VIP); Alg. 2 (AvgExtraSGD) uses extrapolation and Alg. 3 (AvgPastExtraSGD) uses extrapolation from the past. A fourth variant that re-use the mini-batch for the extrapolation step (ReExtraSGD, Alg. 5) is described in §D. These four algorithms return an average of the iterates (typical in stochastic setting). The proofs of the theorems presented in this section are in §F.To handle constraints such as parameter clipping, we gave a projected version of these algorithms, where P Ω [ω] denotes the projection of ω onto Ω (see §A). Note that when Ω = R d, the projection is the identity mapping (unconstrained setting). In order to prove the convergence of these four algorithms, we will assume that F is monotone: DISPLAYFORM3 If F can be written as, it implies that the cost functions are convex.7 Note however that general GANs parametrized with neural networks lead to non-monotone VIPs. Assumption 3. F is monotone and Ω is a compact convex set, such that max ω,ω ∈Ω ω−ω 2 ≤ R 2.In that setting the quantity g(ω *):= max ω∈Ω F (ω) (ω * − ω) is well defined and is equal to 0 if and only if ω * is a solution of (VIP). Moreover, if we are optimizing a zero-sum game, DISPLAYFORM4 is well defined and equal to 0 if and only if (θ *, ϕ *) is a Nash equilibrium of the game. The two functions g and h are called merit functions (more details on the concept of merit functions in §C). In the following, we call, DISPLAYFORM5 Averaging. Alg. 1 (AvgSGD) presents the stochastic gradient method with averaging, which reduces to the standard (simultaneous) SGD updates for the two-player games used in the GAN literature, but returning an average of the iterates. Theorem 2. Under Assump. 1, 2 and 3, SGD with averaging (Alg. 1) with a constant step-size gives, FORMULA1 is called the variance term. This type of bound is standard in stochastic optimization. We also provide in §F a similarÕ(1/ √ t) rate with an extra log factor when η t = η √ t DISPLAYFORM6. We show that this variance term is smaller than the one of SGD with prediction method in §E.Extrapolations. Alg. 2 (AvgExtraSGD) adds an extrapolation step compared to Alg. 
1 in order to reduce the oscillations due to the game between the two players. A theoretical consequence is that it has a smaller variance term than. As discussed previously, Assump. 2 made in Thm. 2 for the convergence of Alg. 1 is very strong in the unbounded setting. One advantage of SGD with extrapolation is that Thm. 3 does not require this assumption. gives, DISPLAYFORM0 Since in practice σ M, the variance term in FORMULA1 is significantly smaller than the one in. To summarize, SGD with extrapolation provides better convergence guarantees but requires two gradient computations and samples per iteration. This motivates our new method, Alg. 3 (AvgPastExtraSGD) which uses extrapolation from the past and achieves the best of both worlds (in theory)., gives that the averaged iterates converge as, DISPLAYFORM1 The bound is similar to the one provided in Thm. 3 but each iteration of Alg. 3 is computationally half the cost of an iteration of Alg. 2. In the previous sections, we presented several techniques that converge for stochastic monotone operators. These techniques can be combined in practice with existing algorithms. We propose to combine them to two standard algorithms used for training deep neural networks: the Adam optimizer and the SGD optimizer . For the Adam optimizer, there are several possible choices on how to update the moments. This choice can lead to different algorithms in practice: for example, even in the unconstrained case, our proposed Adam with extrapolation from the past (Alg. 4) is different from Optimistic Adam BID7 (the moments are updated differently). Note that in the case of a two-player game, the previous convergence can be generalized to gradient updates with a different step-size for each player by simply rescaling the objectives L G and L D by a different scaling factor. A detailed pseudo-code for Adam with extrapolation step (Extra-Adam) is given in Algorithm 4. Note that our interest regarding this algorithm is practical and that we do not provide any convergence proof. Algorithm 4 Extra-Adam: proposed Adam with extrapolation step.input: step-size η, decay rates for moment estimates β 1, β 2, access to the stochastic gradients ∇ t (·) and to the projection DISPLAYFORM0 Sample new mini-batch and compute stochastic gradient: g t ← ∇ t (ω t) Option 2: Extrapolation from the past Load previously saved stochastic gradient: DISPLAYFORM1 Correct the bias for the moments: DISPLAYFORM2 Sample new mini-batch and compute stochastic gradient: g t+1/2 ← ∇ t+1/2 (ω t+1/2) Update estimate of first moment: DISPLAYFORM3 Update estimate of second moment: DISPLAYFORM4 Compute bias corrected for first and second moment: DISPLAYFORM5 2 ) Perform update step from the iterate at time t: DISPLAYFORM6 The extragradient method is a standard algorithm to optimize variational inequalities. This algorithm has been originally introduced by and extended by Nesterov FORMULA1 and. Stochastic versions of the extragradient have been recently analyzed (; ; BID18 for stochastic variational inequalities with bounded constraints. A linearly convergent variance reduced version of the stochastic gradient method has been proposed by for strongly monotone variational inequalities. Extrapolation can also be related to optimistic methods BID5) proposed in the online learning literature (see more details in §3.3). 
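To make the relation between extrapolation and such optimistic updates concrete, the following minimal NumPy sketch (our own illustration; the step size, horizon and initial point are arbitrary choices) runs the basic simultaneous gradient method, the extragradient method (two gradient evaluations per update) and extrapolation from the past (a single fresh gradient evaluation per update) on the unidimensional bilinear game min_θ max_φ θφ of §3, whose vector field is F(θ, φ) = (φ, −θ).

```python
import numpy as np

def F(w):
    # Vector field of min_theta max_phi theta*phi: F(theta, phi) = (phi, -theta).
    theta, phi = w
    return np.array([phi, -theta])

eta, T = 0.1, 200                        # illustrative step size and horizon
w0 = np.array([1.0, 1.0])

# Simultaneous gradient method: diverges geometrically on this game (Prop. 1).
w = w0.copy()
for _ in range(T):
    w = w - eta * F(w)
print("gradient      ", np.linalg.norm(w))

# Extragradient: two gradient evaluations per update, converges (Prop. 2).
w = w0.copy()
for _ in range(T):
    w_half = w - eta * F(w)              # extrapolation (lookahead) step
    w = w - eta * F(w_half)              # update from w with the lookahead gradient
print("extragradient ", np.linalg.norm(w))

# Extrapolation from the past: re-use the stored lookahead gradient, so only
# one fresh gradient evaluation per update.
w, g_past = w0.copy(), F(w0)
for _ in range(T):
    w_half = w - eta * g_past            # extrapolate with the previously stored gradient
    g_past = F(w_half)                   # single new gradient evaluation
    w = w - eta * g_past                 # update from w
print("past-extra    ", np.linalg.norm(w))
```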
Interesting non-convex were proved, for a new notion of regret minimization, by BID16 and in the context of online learning for GANs by BID13.Several methods to stabilize GANs consist in transforming a zero-sum formulation into a more general game that can no longer be cast as a saddle point problem. This is the case of the non-saturating formulation of GANs BID12 BID8, the DCGANs , the gradient penalty 8 for WGANs BID14. propose an optimization method for GANs based on AltSGD using an additional momentum-based step on the generator. BID7 proposed a method inspired from game theory. suggest to dualize the GAN objective to reformulate it as a maximization problem and propose to add the norm of the gradient in the objective to get a better signal. BID10 analyzed a generalization of the bilinear example with a focus put on the effect of momentum on this problem. They do not consider extrapolation (see §B.3 for more details). Unrolling steps can be confused with extrapolation but is fundamentally different: the perspective is to try to approximate the "true generator objective function" unrolling for K steps the updates of the discriminator and then updating the generator. Regarding the averaging technique, some recent work appear to have already successfully used geometric averaging for GANs in practice, but only briefly mention it . By contrast, the present work formally motivates and justifies the use of averaging for GANs by relating them to the VIP perspective, and sheds light on its underlying intuitions in §3.1. Subsequent to our first preprint, Yazıcı et al. explored averaging empirically in more depth, while Mertikopoulos et al. FORMULA0 also investigated extrapolation, providing asymptotic convergence (i.e. without any rate of convergence) in the context of coherent saddle point. The coherence assumption is slightly weaker than monotonicity. Our goal in this experimental section is not to provide new state-of-the art with architectural improvements or a new GAN formulation, but to show that using the techniques (with theoretical guarantees in the monotone case) that we introduced earlier allows us to optimize standard GANs in a better way. These techniques, which are orthogonal to the design of new formulations of GAN optimization objectives, and to architectural choices, can potentially be used for the training of any type of GAN. We will compare the following optimization algorithms: baselines are SGD and Adam using either simultaneous updates on the generator and on the discriminator (denoted SimAdam and SimSGD) or k updates on the discriminator alternating with 1 update on the generator (denoted AltSGD{k} and AltAdam{k}).9 Variants that use extrapolation are denoted ExtraSGD (Alg. 2) and ExtraAdam (Alg. 4). Variants using extrapolation from the past are PastExtraSGD (Alg. 3) and PastExtraAdam (Alg. 4). We also present using as output the averaged iterates, adding Avg as a prefix of the algorithm name when we use (uniform) averaging. We first test the various stochastic algorithms on a simple (n = 10 3, d = 10 3) finite sum bilinear objective (a monotone operator) constrained to [−1, 1] d: solved by (θ DISPLAYFORM0 Results are shown in FIG2 . We can see that AvgAltSGD1 and AvgPastExtraSGD perform the best on this task. We evaluate the proposed techniques in the context of GAN training, which is a challenging stochastic optimization problem where the objectives of both players are non-convex. 
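Since these GAN experiments rely on Adam variants with an extrapolation step (Alg. 4), the snippet below is a hedged, minimal sketch of one way an extrapolation step can be wrapped around an off-the-shelf PyTorch Adam optimizer for a single set of parameters. It is not the exact Extra-Adam of Alg. 4 (in particular, the moment estimates here are simply updated at both half-steps), and the closure `loss_fn`, the learning rate and the betas are illustrative assumptions.

```python
import torch

def extrapolation_step(params, optimizer, loss_fn):
    # One extrapolated update: compute the lookahead point w_{t+1/2} with the
    # gradient at w_t, then update w_t using the gradient taken at w_{t+1/2}.
    backup = [p.detach().clone() for p in params]    # save w_t

    optimizer.zero_grad()
    loss_fn().backward()                              # gradient at w_t
    optimizer.step()                                  # extrapolation: w_{t+1/2}

    optimizer.zero_grad()
    loss_fn().backward()                              # gradient at w_{t+1/2}
    for p, saved in zip(params, backup):
        p.data.copy_(saved)                           # restore w_t before the update
    optimizer.step()                                  # update step: w_{t+1}

# Illustrative usage on a toy objective; in a GAN, the generator and the
# discriminator would each have their own parameters and optimizer, and Alg. 4
# applies the extrapolation and update phases to both players jointly.
w = torch.nn.Parameter(torch.randn(2))
opt = torch.optim.Adam([w], lr=1e-4, betas=(0.5, 0.9))
for _ in range(100):
    extrapolation_step([w], opt, lambda: (w ** 2).sum())
```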
We propose to evaluate the Adam variants of the different optimization algorithms (see Alg. 4 for Adam with extrapolation) by training two different architectures on the CIFAR10 dataset (Right: WGAN-GP trained on CIFAR10: mean and standard deviation of the inception score computed over 5 runs for each method using the best performing learning rates; all experiments were run on a NVIDIA Quadro GP100 GPU. We see that ExtraAdam converges faster than the Adam baselines.2016) with the WGAN objective and weight clipping as proposed by. Then, we compare the different methods on a state-of-the-art architecture by training a ResNet with the WGAN-GP objective similar to BID14. Models are evaluated using the inception score (IS) computed on 50,000 samples. We also provide the FID BID17 and the details on the ResNet architecture in §G.3.For each algorithm, we did an extensive search over the hyperparameters of Adam. We fixed β 1 = 0.5 and β 2 = 0.9 for all methods as they seemed to perform well. We note that as proposed by BID17, it is quite important to set different learning rates for the generator and discriminator. Experiments were run with 5 random seeds for 500,000 updates of the generator. Tab. 1 reports the best IS achieved on these problems by each considered method. We see that the techniques of extrapolation and averaging consistently enable improvements over the baselines (see §G.5 for more experiments on averaging). FIG3 shows training curves for each method (for their best performing learning rate), as well as samples from a ResNet generator trained with ExtraAdam on a WGAN-GP objective. For both tasks, using an extrapolation step and averaging with Adam (ExtraAdam) outperformed all other methods. Combining ExtraAdam with averaging yields that improve significantly over the previous state-of-the-art IS (8.2) and FID (21.7) on CIFAR10 as reported by Miyato et al. FORMULA0 (see Tab. 5 for FID). We also observed that methods based on extrapolation are less sensitive to learning rate tuning and can be used with higher learning rates with less degradation; see §G.4 for more details. We newly addressed GAN objectives in the framework of variational inequality. We tapped into the optimization literature to provide more principled techniques to optimize such games. We leveraged these techniques to develop practical optimization algorithms suitable for a wide range of GAN training objectives (including non-zero sum games and projections onto constraints). We experimentally verified that this could yield better trained models, improving the previous state of the art. The presented techniques address a fundamental problem in GAN training in a principled way, and are orthogonal to the design of new GAN architectures and objectives. They are thus likely to be widely applicable, and benefit future development of GANs. In this section, we recall usual definitions and lemmas from convex analysis. We start with the definitions and lemmas regarding the projection mapping. A.1 PROJECTION MAPPING Definition 1. The projection P Ω onto Ω is defined as, DISPLAYFORM0 When Ω is a convex set, this projection is unique. This is a consequence of the following lemma that we will use in the following sections: the non-expansiveness of the projection onto a convex set. Lemma 1. Let Ω a convex set, the projection mapping DISPLAYFORM1 This is standard convex analysis which can be found for instance in BID2. 
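As a concrete instance of this projection mapping, the sketch below (our own illustration) takes Ω to be the box [−c, c]^d used for weight clipping in WGANs, in which case P_Ω is coordinate-wise clipping, and numerically checks the non-expansiveness of Lemma 1 on random pairs of points; the dimension and the clipping value are arbitrary.

```python
import numpy as np

def project_box(w, c=0.01):
    # P_Omega for Omega = [-c, c]^d: coordinate-wise clipping (WGAN weight clipping).
    return np.clip(w, -c, c)

# Numerical sanity check of Lemma 1 (non-expansiveness of the projection):
# ||P(w1) - P(w2)|| <= ||w1 - w2|| for random pairs of points.
rng = np.random.default_rng(0)
for _ in range(5):
    w1, w2 = rng.normal(size=10), rng.normal(size=10)
    lhs = np.linalg.norm(project_box(w1) - project_box(w2))
    rhs = np.linalg.norm(w1 - w2)
    assert lhs <= rhs + 1e-12
```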
The following lemma is also standard in convex analysis and its proof uses similar arguments as the proof of Lemma 1.Lemma 2. Let ω ∈ Ω and ω DISPLAYFORM2, then for all ω ∈ Ω we have, DISPLAYFORM3 Proof of Lemma 2. We start by simply developing, DISPLAYFORM4 Then since ω + is the projection onto the convex set Ω of ω + u, we have that DISPLAYFORM5 leading to the of the Lemma. Another important property used is the Lipschitzness of an operator. DISPLAYFORM0 In this paper, we also use the notion of strong monotonicity, which is a generalization for operators of the notion of strong convexity. Let us first recall the definition of the latter, DISPLAYFORM1 If a function f (resp. L) is strongly convex (resp. strongly convex-concave), its gradient ∇f (resp. DISPLAYFORM2 Definition 5. For µ > 0, an operator F : Ω → R d is said to be µ-strongly monotone if DISPLAYFORM3 In this section, we will prove the provided in §3, namely Proposition 1, Proposition 2 and Theorem 1. For Proposition 1 and 2, let us recall the context. We wanted to derive properties of some gradient methods on the following simple illustrative example DISPLAYFORM0 B.1 PROOF OF PROPOSITION 1Let us first recall the proposition:Proposition' 1. The simultaneous iterates diverge geometrically and the alternating iterates defined in are bounded but do not converge to 0 as DISPLAYFORM1 The uniform average (θ t,φ t) def = 1 t t−1 s=0 (θ s, φ s) of the simultaneous updates (resp. the alternating updates) diverges (resp. converges to 0) as, DISPLAYFORM2 Proof. Let us start with the simultaneous update rule: DISPLAYFORM3 Then we have, DISPLAYFORM4 The update rule also gives us, DISPLAYFORM5 Summing FORMULA1 for 0 ≤ t ≤ T − 1 to get telescoping sums, we get DISPLAYFORM6 Let us continue with the alternating update rule DISPLAYFORM7 Then we have, DISPLAYFORM8 By simple linear algebra, for η < 2, the matrix M def = 1 −η η 1 − η 2 has two complex conjugate eigenvalues which are DISPLAYFORM9 and their squared magnitude is equal to det(M) = 1 − η 2 + η 2 = 1. We can diagonalize M meaning that there exists P an invertible matrix such that M = P −1 diag(λ +, λ −)P. Then, we have DISPLAYFORM10 and consequently, DISPLAYFORM11 where · C is the norm in C 2 and P:= max u∈C 2 P u C u C is the induced matrix norm. The same way we have, DISPLAYFORM12 Hence, if θ 2 0 + φ 2 0 > 0, the sequence (θ t, φ t) is bounded but do not converge to 0. Moreover the update rule gives us, DISPLAYFORM13 Consequently, since θ DISPLAYFORM14 In this section, we will prove a slightly more precise proposition than Proposition 2, Proposition' 2. The squared norm of the iterates N 2 t def = θ 2 t + φ 2 t, where the update rule of θ t and φ t is defined in, decrease geometrically for any 0 < η < 1 as, DISPLAYFORM0 Proof. Let us recall the update rule for the implicit method DISPLAYFORM1 Then, DISPLAYFORM2 implying that DISPLAYFORM3 which is valid for any η. For the extrapolation method, we have the update rule DISPLAYFORM4 Implying that, DISPLAYFORM5 DISPLAYFORM6 In this section, we will show how to simply extend the study of the algorithm of interest provided in §3 on the general unconstrained bilinear example, DISPLAYFORM0 where, A ∈ R d×p, b ∈ R d and c ∈ R p. 
The only assumption we will make is that this problem is feasible which is equivalent to say that there exists a solution (θ *, ϕ *) to the system DISPLAYFORM1 In this case, we can re-write as DISPLAYFORM2 where c:= −θ * Aϕ * is a constant that does not depend on θ and ϕ.First, let us show that we can reduce the study of simultaneous, alternating, extrapolation and implicit updates rules for to the study of the respective unidimensional updates and.This reduction has already been proposed by BID10. For completeness, we reproduce here similar arguments. The following lemma is a bit more general than the provided by BID10. It states that the study of a wide class of unconstrained first order method on can be reduced to the study of the method on, with potentially rescaled step-sizes. Before explicitly stating the lemma, we need to introduce a bit of notation to encompass easily our several methods in a unified way. First, we let ω t:= (θ t, ϕ t), where the index t here is a more general index which can vary more often than the one in §3. For example, for the extrapolation method, we could consider ω 1 = ω 0+1/2 and ω 2 = ω 1, where ω was the sequence defined for the extragradient. For the alternated updates, we can consider ω 1 = (θ 1, ϕ 0) and ω 2 = (θ 1, ϕ 1) (this also defines θ 2 = θ 1), where θ and ϕ were the sequences originally defined for alternated updates. We are thus ready to state the lemma. Lemma 3. Let us consider the following very general class of first order methods on, i.e., DISPLAYFORM3 where ω t:= (θ t, ϕ t) and F θ (ω t):= Aϕ t − b, F ϕ (ω t) = A θ t − c. Then, we have DISPLAYFORM4 where A = U DV (SVD decomposition) and the couples Proof. Our general class of first order methods can be written with the following update rules: DISPLAYFORM5 where λ it, µ it ∈ R, 0 ≤ i ≤ t + 1. We allow the dependence on t for the algorithm coefficients λ and µ (for example, the alternating rule would zero out some of the coefficients depending on whether we are updating θ or ϕ at the current iteration). Notice also that if both λ (t+1)t and µ (t+1)t are non-zero, we have an implicit scheme. Thus, using the SVD of A = U DV, we get DISPLAYFORM6 which is equivalent to DISPLAYFORM7 where D is a rectangular matrix with zeros except on a diagonal block of size r. Thus, each coordinate ofθ t+1 andφ t+1 are updated independently, reducing the initial problem to r unidimensional problems, DISPLAYFORM8 where σ 1 ≥... ≥ σ r > 0 are the positive diagonal coefficients of D. Note that the only additional restriction is that the coefficients (λ st) and (σ st) (that are the same for 1 ≤ i ≤ r) are rescaled by the singular values of A. In practice, for our methods of interest with a step-size η, it corresponds to the study of r unidimensional problem with a respective step-size DISPLAYFORM9 From this lemma, an extension of Proposition 1 and 2 directly follows to the general unconstrained bilinear objective. We note DISPLAYFORM10 where (Θ *, Φ *) is the set of solutions of. The following corollary is divided in two points, the first point is a from BID10 (note that the on the average is a straightforward extension of the one provided in Proposition 1 and was not provided by BID10), the second is new. Very similar asymptotic upper bounds regarding extrapolation and implicit methods can be derived by computing the exact values of the constant τ 1 and τ 2 (and noticing that τ 3 = ∞) introduced in (, Eq. 3 & 4) for the unconstrained bilinear case. 
However, since works in a very general setting, the bound are not as tight as ours and his proof technique is a bit more technical. Our reduction above provides here a simple proof for our simple setting. • Gidel et al. FORMULA0: The simultaneous iterates diverge geometrically and the alternating iterates are bounded but do not converge to 0 as, Simultaneous: φ s ) of the simultaneous updates (resp. the alternating updates) diverges (resp. converges to 0) as, DISPLAYFORM0 DISPLAYFORM1 • Extrapolation and Implicit method: The iterates respectively generated by the update rules FORMULA0 and FORMULA0 on a bilinear unconstrained problem do converge linearly for any 0 < η < 1 σmax(A) at a rate, DISPLAYFORM2 Particularly, for η = 1 2σmax(A) we get for the extrapolation method, DISPLAYFORM3 where κ:= 2 σmin(A) 2 is the condition number of A A. Let us recall what we call projected extrapolation form the past, where we used the notation ω t = ω t+1/2 for compactness, Extrapolation from the past: DISPLAYFORM0 Perform update step: DISPLAYFORM1 where P Ω [·] is the projection onto the constraint set Ω. An operator F: Ω → R d is said to be µ-strongly monotone if DISPLAYFORM2 If F is strongly monotone, we can prove the following theorem:Theorem' 1. If F is µ-strongly monotone (see §A for the definition of strong monotonicity) and L-Lipschitz, then the updates FORMULA1 and FORMULA0 with η = 1 4L provide linearly converging iterates, DISPLAYFORM3 Proof. In order to prove this theorem, we will prove a slightly more general , DISPLAYFORM4 with the convention that ω 0 = ω −1 = ω −2. It implies that DISPLAYFORM5 Let us first proof three technical lemmas.11 As before, the inequality for the implicit scheme is actually valid for any step-size. Lemma 4. If F is µ-strongly monotone, we have DISPLAYFORM6 Proof. By strong monotonicity and optimality of ω *, DISPLAYFORM7 and then we use the inequality 2 ω t − ω * 2 2 ≥ ω t − ω * 2 2 − 2 ω t − ω t 2 2 to get the claimed. DISPLAYFORM8 and DISPLAYFORM9 Summing FORMULA3 and FORMULA4 we get, DISPLAYFORM10 DISPLAYFORM11 Then, we can use the Young's inequality 2a DISPLAYFORM12 Lemma 6. For all t ≥ 0, if we set ω −2 = ω −1 = ω 0 we have DISPLAYFORM13 Proof. We start with a + b 2 2 ≤ 2 a 2 + 2 b 2. DISPLAYFORM14 Moreover, since the projection is contractive we have that DISPLAYFORM15 Combining FORMULA1 and FORMULA4 we get, DISPLAYFORM16 DISPLAYFORM17 Proof of Theorem 1. Let ω * ∈ Ω * be an optimal point of (VIP). Combining Lemma 4 and Lemma 5 we get, DISPLAYFORM18 leading to, DISPLAYFORM19 Now using Lemma 6 we get, DISPLAYFORM20 Now with η t = 1 4L ≤ 1 4µ we get, DISPLAYFORM21 Hence, using the fact that DISPLAYFORM22 we get, DISPLAYFORM23 In this section, we will present how to handle an unbounded constraint set Ω with a more refined merit function than used in the main paper. Let F be the continuous operator and Ω be the constraint set associated with the VIP, DISPLAYFORM24 When the operator F is monotone, we have that DISPLAYFORM25 Hence, in this case (VIP) implies a stronger formulation sometimes called Minty variational inequality BID6: DISPLAYFORM26 This formulation is stronger in the sense that if (MVI) holds for some ω * ∈ Ω, then (VIP) holds too. A merit function useful for our analysis can be derived from this formulation. Roughly, a merit function is a convergence measure. More formally, a function g: Ω → R is called a merit function if g is non-negative such that g(ω) = 0 ⇔ ω ∈ Ω * . 
A way to derive a merit function from (MVI) would be to use g(ω *) = sup ω∈Ω F (ω) (ω * − ω) which is zero if and only if (MVI) holds for ω *. To deal with unbounded constraint sets (leading to a potentially infinite valued function outside of the optimal set), we use the restricted merit function : = Ω ∩ {ω : ω − ω 0 < R}. Then for any pointω ∈ Ω R, we have: DISPLAYFORM27 DISPLAYFORM28 The reference point ω 0 is arbitrary, but in practice it is usually the initialization point of the algorithm. R has to be big enough to ensure that Ω R contains a solution. Err R measures how much (MVI) is violated on the restriction Ω R. Such merit function is standard in the variational inequality literature. A similar one is used in . When F is derived from the gradients of a zero-sum game, we can define a more interpretable merit function. One has to be careful though when extending properties from the minimization setting to the saddle point setting (e.g. the merit function used by Yadav et al. FORMULA0 is vacuous for a bilinear game as explained in App C.2).In the appendix, we adopt a set of assumptions a little more general than the one in the main paper: DISPLAYFORM29 • F is monotone and Ω is convex and closed.• R is set big enough such that R > ω 0 − ω * and F is a monotone operator. Contrary to Assumption 3, in Assumption 4 the constraint set in no longer assumed to be bounded. Assumption 4 is implied by Assumption 3 by setting R to the diameter of Ω, and is thus more general. In this appendix, we will note Err (VI) R the restricted merit function defined in. Let us recall its definition, Err DISPLAYFORM0 When the objective is a saddle point problem i.e., DISPLAYFORM1 and L is convex-concave (see Definition 4 in §A), we can use another merit function than FORMULA0 on Ω R that is more interpretable and more directly related to the cost function of the minimax formulation: DISPLAYFORM2 In particular, if the equilibrium (θ *, ϕ *) ∈ Ω * ∩ Ω R and we have that L(·, ϕ *) and −L(θ *, ·) are µ-strongly convex (see §A), then the merit function for saddle points upper bounds the distance for (θ, ϕ) ∈ Ω R to the equilibrium as: DISPLAYFORM3 In the appendix, we provide our convergence with the merit functions and, depending on the setup: DISPLAYFORM4 In this section, we illustrate the fact that one has to be careful when extending and properties from the minimization setting to the minimax setting (and consequently to the variational inequality setting). Another candidate as a merit function for saddle point optimization would be to naturally extend the suboptimality f (ω) − f (ω *) used in standard minimization (i.e. find ω * the minimizer of f) to the gap DISPLAYFORM0 In a previous analysis of a modification of the stochastic gradient descent (SGD) method for gave their convergence rate on P that they called the "primal-dual" gap. Unfortunately, if we do not assume that the function L is strongly convex-concave (a stronger assumption defined in §A and which fails for bilinear objective e.g.), P may not be a merit function. It can be 0 for a non optimal point, see for instance the discussion on the differences between and P in (, Section 3). In particular, for the simple 2D bilinear example L(θ, ϕ) = θ · ϕ, we have that θ * = ϕ * = 0 and thus P (θ, ϕ) = 0 ∀θ, ϕ. When the cost functions defined in are non-convex, the operator F is no longer monotone. Nevertheless, (VIP) and (MVI) can still be defined, though a solution to (MVI) is less likely to exist. 
We note that (VIP) is a local condition for F (as only evaluating F at the points ω *). On the other hand, an appealing property of (MVI) is that it is a global condition. In the context of minimization of a function f for example (where F = ∇f), if ω * solves (MVI) then ω * is a global minimum of f (and not just a stationary point for the solution of (MVI); see Proposition 2.2 from BID6 ).A less restrictive way to consider variational inequalities in the non-monotone setting is to use a local version of (MVI). If the cost functions are locally convex around the optimal couple (θ *, ϕ *) and if our iterates eventually fall and stay into that neighborhood, then we can consider our restricted merit function Err R (·) with a well suited constant R and apply our convergence for monotone operators. We now introduce another way to combine extrapolation and SGD. This extension is very similar to AvgExtraSGD Alg. 2, the only difference is that it re-uses the mini-batch sample of the extrapolation step for the update of the current point. The intuition is that it correlates the estimator of the gradient of the extrapolation step and the one of the update step leading to a better correction of the oscillations which are also due to the stochasticity. One emerging issue (for the analysis) of this method is that since ω t depend on ξ t, the quantity F (ω t, ξ t) is a biased estimator of F (ω t).Algorithm 5 Re-used mini-batches for stochastic extrapolation (ReExtraSGD) DISPLAYFORM0 Sample ξ t ∼ P 4: DISPLAYFORM1 Extrapolation step 5: DISPLAYFORM2 Update step with the same sample 6: end for has the following convergence properties: DISPLAYFORM3 DISPLAYFORM4 The assumption that the sequence of the iterates provided by the algorithm is bounded is strong, but has also been made for instance in . The proof of this is provided in §F. To compare the variance term of AvgSGD in FORMULA1 with the one of the SGD with prediction method , we need to have the same convergence certificate. Fortunately, their proof can be adapted to our convergence criterion (using Lemma 7 in §F), revealing an extra σ 2 /2 in the variance term from their paper. The ing variance can be summarized with our notation as DISPLAYFORM0 where the L is the Lipschitz constant of the operator F. Since M σ, their variance term is then 1 + L time larger than the one provided by the AvgSGD method. This section is dedicated on the proof of the theorems provided in this paper in a slightly more general form working with the merit function defined in. First we prove an additional lemma necessary to the proof of our theorems. Lemma 7. Let F be a monotone operator and let (ω t), (ω t), (z t), (∆ t), (ξ t) and (ζ t) be six random sequences such that, for all t ≥ 0 2η t F (ω t) (ω t − ω) ≤ N t − N t+1 + η where N t = N (ω t, ω t−1, ω t−2) ≥ 0 and we extend (ω t) with ω −2 = ω −1 = ω 0. Let also assume that with DISPLAYFORM0 Proof of Lemma 7. 
We sum for 0 ≤ t ≤ T − 1 to get, DISPLAYFORM1 We will then upper bound each sum in the right-hand side, DISPLAYFORM2 where u t+1 DISPLAYFORM3 Then noticing that z 0 def = ω 0, back to we get a telescoping sum, DISPLAYFORM4 If F is the operator of a convex-concave saddle point, we get, with ω t = (θ t, ϕ t) DISPLAYFORM5 then by convexity of L(·, ϕ) and concavity of L(θ, ·), we have that, DISPLAYFORM6 In both cases, we can now maximize the left hand side respect to ω (since the RHS does not depend on ω) to get, DISPLAYFORM7 Then taking the expectation, since E[∆ t |z t, DISPLAYFORM8 Published as a conference paper at ICLR 2019 DISPLAYFORM9 First let us state Theorem 2 in its general form, Theorem' 2. Under Assumption 1, 2 and 4, Alg. 1 with constant step-size η has the following convergence rate for all T ≥ 1, DISPLAYFORM10 DISPLAYFORM11 Proof of Theorem 2. Let any ω ∈ Ω such that ω 0 − ω 2 ≤ R, DISPLAYFORM12 (projections are non-contractive, Lemma 1) DISPLAYFORM13 Then we can make appear the quantity F (ω t) (ω t − ω) on the left-hand side, DISPLAYFORM14 we can sum for 0 ≤ t ≤ T − 1 to get, DISPLAYFORM15 where we noted DISPLAYFORM16 DISPLAYFORM17 We will then upper bound each sum in the right hand side, DISPLAYFORM18 where u t+1 def = P Ω (u t − η t ∆ t) and u 0 = ω 0. Then, DISPLAYFORM19 Then noticing that u 0 def = ω 0, back to we get a telescoping sum, DISPLAYFORM20 Then the right hand side does not depends on ω, we can maximize over ω to get, DISPLAYFORM21 Noticing that E[∆ t |ω t, u t] = 0 (the estimates of F are unbiased), by Assumption 2 DISPLAYFORM22 particularly for η t = η and η t = η √ t+1we respectively get, DISPLAYFORM23 and DISPLAYFORM24 Theorem' 3. Under Assumption 1 and 4, if DISPLAYFORM25 has the following convergence rate for any T ≥ 1, DISPLAYFORM26 DISPLAYFORM27 Proof of Thm. 3. Let any ω ∈ Ω such that ω 0 − ω 2 ≤ R. Then, the update rules become ω t+1 = P Ω (ω t − η t F (ω t, ζ t)) and ω t = P Ω (ω t − ηF (ω t, ξ t)). We start by applying Lemma 2 for (ω, u, ω, ω +) = (ω t, −ηF (ω t, ζ t), ω, ω t+1 ) and (ω, u, ω, ω DISPLAYFORM28 Then, summing them we get DISPLAYFORM29 Using the inequality DISPLAYFORM30 Then we can use the L-Lipschitzness of F to get, DISPLAYFORM31 As we restricted the step-size to η t ≤ 1 √ 3Lwe get, DISPLAYFORM32 We get a particular case of so we can use Lemma 7 where DISPLAYFORM33 By Assumption 1, M 1 = M 2 = 3σ 2 and by the fact that DISPLAYFORM34 the hypothesis of Lemma 7 hold and we get, DISPLAYFORM35 has the following convergence rate for any T ≥ 1, DISPLAYFORM36 DISPLAYFORM37 First let us recall the update rule DISPLAYFORM38 Lemma 8. We have for any ω ∈ Ω, DISPLAYFORM39 Proof. Applying Lemma 2 for (ω, u, ω DISPLAYFORM40 and DISPLAYFORM41 Summing FORMULA0 and FORMULA0 we get, DISPLAYFORM42 DISPLAYFORM43 DISPLAYFORM44 Then, we can use the inequality of arithmetic and geometric means 2a DISPLAYFORM45 Using the inequality a DISPLAYFORM46 where we used the L-Lipschitzness of F for the last inequality. Combining FORMULA0 with FORMULA0 we get, DISPLAYFORM47 Lemma 9. For all t ≥ 0, if we set ω −2 = ω −1 = ω 0 we have DISPLAYFORM48 Proof. We start with a + b 2 2 ≤ 2 a 2 + 2 b 2. DISPLAYFORM49 Moreover, since the projection is contractive we have that DISPLAYFORM50 where in the last line we used the same inequality as in. Combining FORMULA0 and FORMULA0 we get, DISPLAYFORM51 Proof of Theorem 4. 
Combining Lemma 9 and Lemma 8 we get, DISPLAYFORM52 Then for DISPLAYFORM53 we have 36η DISPLAYFORM54 We can then use Lemma 7 where DISPLAYFORM55 We now consider a task similar to where the discriminator is linear D ϕ (ω) = ϕ T ω, the generator is a Dirac distribution at θ, q θ = δ θ and the distribution we try to match is also a Dirac at ω *, p = δ ω *. The minimax formulation from BID12 gives: min DISPLAYFORM0 Note that as observed by , this objective is concave-concave, making it hard to optimize. We compare the methods on this objective where we take ω * = −2, thus the position of the equilibrium is shifted towards the position (θ, ϕ) = (−2, 0). The convergence and the gradient vector field are shown in FIG7. We observe that depending on the initialization, some methods can fail to converge but extrapolation seems to perform better than the other methods. In addition to the presented in section §7.2, we also trained the DCGAN architecture with the WGAN-GP objective. The are shown in Table 3. The best are achieved with uniform averaging of AltAdam5. However, its iterations require to update the discriminator 5 times for every generator update. With a small drop in best final score, ExtraAdam can train WGAN-GP significantly faster (see Fig. 6 right) as the discriminator and generator are updated only twice. In addition to the inception scores, we also computed the FID scores BID17 ) using 50,000 samples for the ResNet architecture with the WGAN-GP objective; the are presented in TAB7. We see that the and are similar to the one obtained from the inception scores, adding an extrapolation step as well as using Exponential Moving Average (EMA) consistently improves the FID scores. However, contrary to the from the inception score, we observe that uniform averaging does not necessarily improve the performance of the methods. This could be due to the fact that the samples produced using uniform averaging are more blurry and FID is more sensitive to blurriness; see §G.3 for more details about the effects of uniform averaging. Figure 6: DCGAN architecture with WGAN-GP trained on CIFAR10: mean and standard deviation of the inception score computed over 5 runs for each method using the best performing learning rate plotted over number of generator updates (Left) and wall-clock time (Right); all experiments were run on a NVIDIA Quadro GP100 GPU. We see that ExtraAdam converges faster than the Adam baselines. Number of generator updates Figure 7: Inception score on CIFAR10 for WGAN-GP (DCGAN) over number of generator updates for different learning rates. We can see that AvgExtraAdam is less sensitive to the choice of learning rate. In this section, we compare how the methods presented in §7 perform with the same step-size. We follow the same protocol as in the experimental section §7, we consider the DCGAN architecture with WGAN-GP experiment described in App §G.2. In Figure 7 we plot the inception score provided by each training method as a function of the number of generator updates. Note that these plots advantage AltAdam5 a bit because each iteration of this algorithm is a bit more costly (since it perform 5 discriminator updates for each generator update). Nevertheless, the goal of this experiment is not to show that AltAdam5 is faster but to show that ExtraAdam is less sensitive to the choice of learning rate and can be used with higher learning rates with less degradation. 
In FIG11, we compare the sample quality on the DCGAN architecture with the WGAN-GP objective of AltAdam5 and AvgExtraAdam for different step-sizes. We notice that for AvgExtraAdam, the sample quality does not significantly change whereas the sample quality of AltAdam5 seems to be really sensitive to step-size tunning. We think that robustness to step-size tuning is a key property for an optimization algorithm in order to save as much time as possible to tune other hyperparameters of the learning procedure such as regularization. In this section, we compare how uniform averaging affect the performance of the methods presented in §7. We follow the same protocol as in the experimental section §7, we consider the DCGAN architecture with the WGAN and weight clipping objective as well as the WGAN-GP objective. In Figure 9 and 10, we plot the inception score provided by each training method as a function of the number of generator updates with and without uniform averaging. We notice that uniform averaging seems to improve the inception score, nevertheless it looks like the sample are a bit more blurry (see FIG1). This is confirmed by our FIG1 Number of generator updates Number of generator updates Figure 12: The Fréchet Inception Distance (FID) from BID17 computed using 50,000 samples, on the WGAN experiments. ReExtraAdam refers to Alg. 5 introduced in §D. We can see that averaging performs worse than when comparing with the Inception Score. We observed that the samples generated by using averaging are a little more blurry and that the FID is more sensitive to blurriness, thus providing an explanation for this observation. | We cast GANs in the variational inequality framework and import techniques from this literature to optimize GANs better; we give algorithmic extensions and empirically test their performance for training GANs. | 634 | scitldr |
In order to efficiently learn with small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones. However, a critical challenge in meta-learning is the task heterogeneity which cannot be well handled by traditional globally shared meta-learning methods. In addition, current task-specific meta-learning methods may either suffer from hand-crafted structure design or lack the capability to capture complex relations between tasks. In this paper, motivated by the way of knowledge organization in knowledge bases, we propose an automated relational meta-learning (ARML) framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph. When a new task arrives, it can quickly find the most relevant structure and tailor the learned structure knowledge to the meta-learner. As a , the proposed framework not only addresses the challenge of task heterogeneity by a learned meta-knowledge graph, but also increases the model interpretability. We conduct extensive experiments on 2D toy regression and few-shot image classification and the demonstrate the superiority of ARML over state-of-the-art baselines. Learning quickly is the key characteristic of human intelligence, which remains a daunting problem in machine intelligence. The mechanism of meta-learning is widely used to generalize and transfer prior knowledge learned from previous tasks to improve the effectiveness of learning on new tasks, which has benefited various applications, such as computer vision (;, natural language processing and social good (; a). Most of existing meta-learning algorithms learn a globally shared meta-learner (e.g., parameter initialization (;, meta-optimizer , metric space (; ;) ). However, globally shared meta-learners fail to handle tasks lying in different distributions, which is known as task heterogeneity (; b). Task heterogeneity has been regarded as one of the most challenging issues in meta-learning, and thus it is desirable to design meta-learning models that effectively optimize each of the heterogeneous tasks. The key challenge to deal with task heterogeneity is how to customize globally shared meta-learner by using task-specific information? Recently, a handful of works try to solve the problem by learning a task-specific representation for tailoring the transferred knowledge to each task (; ;). However, the expressiveness of these methods is limited due to the impaired knowledge generalization between highly related tasks. Recently, learning the underlying structure across tasks provides a more effective way for balancing the customization and generalization. Representatively, Yao et al. propose a hierarchically structured meta-learning method to customize the globally shared knowledge to each cluster (b). Nonetheless, the hierarchical clustering structure completely relies on the handcrafted design which needs to be tuned carefully and may lack the capability to capture complex relationships. Hence, we are motivated to propose a framework to automatically extract underlying relational structures from historical tasks and leverage those relational structures to facilitate knowledge customization on a new task. This inspiration comes from the way of structuring knowledge in knowledge bases (i.e., knowledge graphs). In knowledge bases, the underlying relational structures across text entities are automatically constructed and applied to a new query to improve the searching efficiency. 
In the meta-learning problem, similarly, we aim at automatically establishing the metaknowledge graph between prior knowledge learned from previous tasks. When a new task arrives, it queries the meta-knowledge graph and quickly attends to the most relevant entities (vertices), and then takes advantage of the relational knowledge structures between them to boost the learning effectiveness with the limited training data. The proposed meta-learning framework is named as Automated Relational Meta-Learning (ARML). Specifically, the ARML automatically builds the meta-knowledge graph from meta-training tasks to memorize and organize learned knowledge from historical tasks, where each vertex represents one type of meta-knowledge (e.g., the common contour between birds and aircrafts). To learn the meta-knowledge graph at meta-training time, for each task, we construct a prototype-based relational graph for each class, where each vertex represents one prototype. The prototype-based relational graph not only captures the underlying relationship behind samples, but alleviates the potential effects of abnormal samples. The meta-knowledge graph is then learned by summarizing the information from the corresponding prototype-based relational graphs of meta-training tasks. After constructing the meta-knowledge graph, when a new task comes in, the prototype-based relational graph of the new task taps into the meta-knowledge graph for acquiring the most relevant knowledge, which further enhances the task representation and facilitates its training process. Our major contributions of the proposed ARML are three-fold: it automatically constructs the meta-knowledge graph to facilitate learning a new task; it empirically outperforms the state-ofthe-art meta-learning algorithms; the meta-knowledge graph well captures the relationship among tasks and improves the interpretability of meta-learning algorithms. Meta-learning designs models to learn new tasks or adapt to new environments quickly with a few training examples. There are mainly three research lines of meta-learning: black-box amortized methods design black-box meta-learners to infer the model parameters (; ; ;); gradient-based methods aim to learn an optimized initialization of model parameters, which can be adapted to new tasks by a few steps of gradient descent (; ; ;); non-parametric methods combine parametric meta-learners and non-parametric learners to learn an appropriate distance metric for few-shot classification (; ; ; ; ;). Our work is built upon the gradient-based meta-learning methods. In the line of gradient-based meta-learning, most algorithms learn a globally shared meta-learners from previous tasks (; ;), to improve the effectiveness of learning process on new tasks. However, these algorithms typically lack the ability to handle heterogeneous tasks (i.e., tasks sample from sufficient different distributions). To tackle this challenge, recent works tailor the globally shared initialization to different tasks by customizing initialization (; b) and using probabilistic models (; . Representatively, HSML customizes the globally shared initialization with a manually designed hierarchical clustering structure to balance the generalization and customization (b). However, the handcrafted designed hierarchical structure may not accurately reflect the real structure and the clustering structure constricts the complexity of relationship. 
Compared with these methods, ARML leverages the most relevant structure from the automatically constructed meta-knowledge graph. Thus, ARML not only discovers more accurate underlying structures to improve the effectiveness of meta-learning algorithms, but also the meta-knowledge graph further enhances the model interpretability. Few-shot Learning Considering a task Ti, the goal of few-shot learning is to learn a model with a dataset Di = {D, θ), and obtain the optimal parameters θi. For the regression problem, the loss function is defined based on the mean square error (i.e., (x j,y j)∈D tr i f θ (xj)−yj 2 2 ) and for the classification problem, the loss function uses the cross entropy loss (i.e., − (x j,y j)∈D tr i log p(yj|xj, f θ)). Usually, optimizing and learning parameter θ for the task Ti with a few labeled training samples is difficult. To address this limitation, meta-learning provides us a new perspective to improve the performance by leveraging knowledge from multiple tasks. Meta-learning and Model-agnostic Meta-learning In meta-learning, a sequence of tasks {T1, ..., TI} are sampled from a task-level probability distribution p(T), where each one is a few-shot learning task. To facilitate the adaption for incoming tasks, the meta-learning algorithm aims to find a well-generalized meta-learner on I training tasks at meta-learning phase. At meta-testing phase, the optimal meta-learner is applied to adapt the new tasks Tt. In this way, meta-learning algorithms are capable of adapting to new tasks efficiently even with a shortage of training data for a new task. Model-agnostic meta-learning (MAML) , one of the representative algorithms in gradient-based meta-learning, regards the meta-learner as the initialization of parameter θ, i.e., θ0, and learns a well-generalized initialization θ * 0 during the meta-training process. The optimization problem is formulated as (one gradient step as exemplary): At the meta-testing phase, to obtain the adaptive parameter θt for each new task Tt, we finetune the initialization of parameter θ * 0 by performing gradient updates a few steps, i.e., In this section, we introduce the details of the proposed ARML. To better explain how it works, we show its framework in Figure 1. The goal of ARML is to facilitate the learning process of new tasks by leveraging transferable knowledge learned from historical tasks. To achieve this goal, we introduce a meta-knowledge graph, which is automatically constructed at the meta-training time, to organize and memorize historical learned knowledge. Given a task, which is built as a prototypebased relational structure, it taps into the meta-knowledge graph to acquire relevant knowledge for enhancing its own representation. The enhanced prototype representations further aggregate and incorporate with meta-learner for fast and effective adaptions by utilizing a modulating function. In the following subsections, we elaborate three key components: prototype-based sample structuring, automated meta-knowledge graph construction and utilization, and task-specific knowledge fusion and adaptation, respectively. Given a task which involves either classifications or regressions regarding a set of samples, we first investigate the relationships among these samples. 
Such relationship is represented by a graph, called prototype-based relational graph in this work, where the vertices in the graph denote the prototypes of different classes while the edges and the corresponding edge weights are created based on the similarities between prototypes. Constructing the relational graph based on prototypes instead of raw samples allows us to alleviate the issue raised by abnormal samples. As the abnormal samples, which locate far away from normal samples, could pose significant concerns especially when only a limited number of samples are available for training. Specifically, for classification problem, the prototype, denoted by c, is defined as: where N tr k denotes the number of samples in class k. E is an embedding function, which projects xj into a hidden space where samples from the same class are located closer to each other while samples from different classes stay apart. For regression problem, it is not straightforward to construct Figure 1: The framework of ARML. For each task T i, ARML first builds a prototype-based relational structure R i by mapping the training samples D tr i into prototypes, with each prototype represents one class. Then, R i interacts with the meta-knowledge graph G to acquire the most relevant historical knowledge by information propagation. Finally, the task-specific modulation tailors the globally shared initialization θ 0 by aggregating of raw prototypes and enriched prototypes, which absorbs relevant historical information from the meta-knowledge graph. the prototypes explicitly based on class information. Therefore, we cluster samples by learning an assignment matrix Pi ∈ R K×N tr. Specifically, we formulate the process as: where Pi[k] represents the k-th row of Pi. Thus, training samples are clustered to K clusters, which serve as the representation of prototypes. After calculating all prototype representations {c k i |∀k ∈ [1, K]}, which serve as the vertices in the the prototype-based relational graph Ri, we further define the edges and the corresponding edge weights. The edge weight A Ri (c where Wr and br represents learnable parameters, γr is a scalar and σ is the Sigmoid function, which normalizes the weight between 0 and 1. For simplicity, we denote the prototype-based relational graph K×d represent a set of vertices, with each one corresponds to the prototype from a class, while gives the adjacency matrix, which indicates the proximity between prototypes. In this section, we first discuss how to organize and distill knowledge from historical learning process and then expound how to leverage such knowledge to benefit the training of new tasks. To organize and distill knowledge from historical learning process, we construct and maintain a meta-knowledge graph. The vertices represent different types of meta-knowledge (e.g., the common contour between aircrafts and birds) and the edges are automatically constructed to reflect the relationship between meta-knowledge. When serving a new task, we refer to the meta-knowledge, which allows us to efficiently and automatically identify relational knowledge from previous tasks. In this way, the training of a new task can benefit from related training experience and get optimized much faster than otherwise possible. In this paper, the meta-knowledge graph is automatically constructed at the meta-training phase. 
The details of the construction are elaborated as follows: Assuming the representation of an vertex g is given by h g ∈ R d, we define the meta-knowledge graph as G = (HG, AG), where HG = {h G×G denote the vertex feature matrix and vertex adjacency matrix, respectively. To better explain the construction of the meta-knowledge graph, we first discuss the vertex representation HG. During meta-training, tasks arrive one after another in a sequence and their corresponding vertices representations are expected to be updated dynamically in a timely manner. Therefore, the vertex representation of meta-knowledge graph are defined to get parameterized and learned at the training time. Moreover, to encourage the diversity of meta-knowledge encoded in the meta-knowledge graph, the vertex representations are randomly initialized. Analogous to the definition of weight in the prototype-based relational graph Ri in equation 4, the weight between a pair of vertices j and m is constructed as: where Wo and bo represent learnable parameters and γo is a scalar. To enhance the learning of new tasks with involvement of historical knowledge, we query the prototype-based relational graph in the meta-knowledge graph to obtain the relevant knowledge in history. The ideal query mechanism is expected to optimize both graph representations simultaneously at the meta-training time, with the training of one graph facilitating the training of the other. In light of this, we construct a super-graph Si by connecting the prototype-based relational graph Ri with the meta-knowledge graph G for each task Ti. The union of the vertices in Ri and G contributes to the vertices in the super-graph. The edges in Ri and G are also reserved in the super-graph. We connect Ri with G by creating links between the prototype-based relational graph with the meta-knowledge graph. The link between prototype c j i in prototype-based relational graph and vertex h m in metaknowledge graph is weighted by the similarity between them. More precisely, for each prototype c and {h m |∀m ∈ [1, G]} as follows: where γ s is a scaling factor. We denote the intra-adjacent matrix as AS = {AS (c K×G . Thus, for task T i, the adjacent matrix and feature matrix of super-graph After constructing the super-graph Si, we are able to propagate the most relevant knowledge from meta-knowledge graph G to the prototype-based relational graph Ri by introducing a Graph Neural Networks (GNN). In this work, following the "message-passing" framework , the GNN is formulated as: where MP(·) is the message passing function and has several possible implementations (; ; Veličković et al., 2018) is the vertex embedding after l layers of GNN and W (l) is a learnable weight matrix of layer l. The input After stacking L GNN layers, we get the information-propagated feature representation for the prototype-based relational graph Ri as the top-K rows of H After propagating information form meta-knowledge graph to prototype-based relational graph, in this section, we discuss how to learn a well-generalized meta-learner for fast and effective adaptions to new tasks with limited training data. To tackle the challenge of task heterogeneity, in this paper, we incorporate task-specific information to customize the globally shared meta-learner (e.g., initialization here) by leveraging a modulating function, which has been proven to be effective to provide customized initialization in previous studies ). 
The modulating function relies on well-discriminated task representations, while it is difficult to learn all representations by merely utilizing the loss signal derived from the test set D ts i. To encourage such stability, we introduce two reconstructions by utilizing two auto-encoders. There are two collections of parameters, i.e, CR i andĈR i, which contribute the most to the creation of the task-specific meta-learner. CR i express the raw prototype information without tapping into the meta-knowledge graph, whileĈR i give the prototype representations after absorbing the relevant knowledge from the meta-knowledge graph. Therefore, the two reconstructions are built on CR i andĈR i. To reconstruct CR i, an aggregator AG q (·) (e.g., recurrent network, fully connected layers) is involved to encode CR i into a dense representation, which is further fed into a decoder AG q dec (·) to achieve reconstructions. Compute the similarity between each prototype and meta-knowledge vertex in equation 6 and construct the super-graph Si 8: Apply GNN on super-graph Si and get the information-propagated representationĈR i 9: Aggregate CR i in equation 8 andĈR i in equation 9 to get the representations qi, ti and reconstruction loss Lq, Lt Compute the task-specific initialization θ0i in equation 10 and update end for 12: 13: end while Then, the corresponded task representation qi of CR i is summarized by applying a mean pooling operator over prototypes on the encoded dense representation. Formally, Similarly, we reconstructĈR i and get the corresponded task representation ti as follows: The reconstruction errors in Equations 8 and 9 pose an extra constraint to enhance the training stability, leading to improvement of task representation learning. After getting the task representation qi and ti, the modulating function is then used to tailor the task-specific information to the globally shared initialization θ0, which is formulated as: where Wg and b g is learnable parameters of a fully connected layer. Note that we adopt the Sigmoid gating as exemplary and more discussion about different modulating functions can be found in ablation studies of Section 5. For each task Ti, we perform the gradient descent process from θ0i and reach its optimal parameter θi. Combining the reconstruction loss Lt and Lq with the meta-learning loss defined in equation 1, the overall objective function of ARML is: where µ1 and µ2 are introduced to balance the importance of these three items. Φ represents all learnable parameters. The algorithm of meta-training process of ARML is shown in Alg. 2. The details of the meta-testing process of ARML are available in Appendix A. In this section, we conduct extensive experiments to demonstrate the effectiveness of the ARML on 2D regression and few-shot classification. We compare our proposed ARML with two types of baselines: Gradient-based meta-learning methods: both globally shared methods (MAML , Meta-SGD ) and task-specific methods (MT-Net , MUMO-MAML , HSML (b), BMAML ) are considered for comparison. Other meta-learning methods (non-parametric and black box amortized methods): we select globally shared methods VERSA , Prototypical Network (ProtoNet) , TapNet , we use the GRU as the encoder and decoder in this structure. We adopt one layer GCN with tanh activation as the implementation of GNN in equation 7. For the modulation network, we test sigmoid, tanh and Film modulation, and find that sigmoid modulation achieves best performance. 
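A minimal sketch of the prototype aggregation and sigmoid-gated modulation just described is shown below in runnable PyTorch form. The GRU aggregators, the mean pooling over prototypes and the per-parameter sigmoid gate follow the text, but the module names (TaskModulator, enc_q, dec_q, ...) and the exact reconstruction targets are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TaskModulator(nn.Module):
    """Aggregates raw / enriched prototypes and gates the shared initialization."""
    def __init__(self, d, n_params):
        super().__init__()
        self.enc_q = nn.GRU(d, d, batch_first=True)   # aggregator AG_q for raw prototypes
        self.dec_q = nn.GRU(d, d, batch_first=True)   # decoder used for the reconstruction L_q
        self.enc_t = nn.GRU(d, d, batch_first=True)   # aggregator AG_t for enriched prototypes
        self.dec_t = nn.GRU(d, d, batch_first=True)
        self.gate = nn.Linear(2 * d, n_params)        # W_g, b_g of the modulating function

    def forward(self, C_R, C_R_hat, theta0_flat):
        # C_R, C_R_hat: (1, K, d) raw and information-propagated prototypes.
        zq, _ = self.enc_q(C_R)
        zt, _ = self.enc_t(C_R_hat)
        rec_q, _ = self.dec_q(zq)                     # reconstructions giving L_q, L_t
        rec_t, _ = self.dec_t(zt)
        loss_q = ((rec_q - C_R) ** 2).mean()
        loss_t = ((rec_t - C_R_hat) ** 2).mean()

        q = zq.mean(dim=1)                            # mean pooling over prototypes
        t = zt.mean(dim=1)
        gate = torch.sigmoid(self.gate(torch.cat([q, t], dim=-1)))  # per-parameter gate
        theta0_i = theta0_flat * gate                 # task-specific initialization
        return theta0_i, loss_q + loss_t
```

The gated initialization θ0i is then fine-tuned on the task's training set by the usual inner-loop gradient steps.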
Thus, in the future experiment, we set the sigmoid modulation as modulating function. More detailed discussion about experiment settings are presented in Appendix B. Dataset Description In 2D regression problem, we adopt the similar regression problem settings as; b; ), which includes several families of functions. In this paper, to model more complex relational structures, we design a 2D regression problem rather than traditional 1D regression. Results and Analysis In Figure 2, we summarize the interpretation of meta-knowledge graph (see top figure, and more cases are provided in Figure 8 of Appendix G.4) and the the qualitative (see bottom table) of 10-shot 2D regression. In the bottom table, we can observe that ARML achieves the best performance as compared to competitive gradient-based meta-learning methods, i.e., globally shared models and task-specific models. This finding demonstrates that the meta-knowledge graph is necessary to model and capture task-specific information. The superior performance can also be interpreted in the top figure. In the left, we show the heatmap between prototypes and meta-knowledge vertices (darker color means higher similarity). We can see that sinusoids and line activate V1 and V4, which may represent curve and line, respectively. V1 and V4 also contribute to quadratic and quadratic surface, which also show the similarity between these two families of functions. V3 is activated in P0 of all functions and the quadratic surface and ripple further activate V1 in P0, which may show the different between 2D functions and 3D functions (sinusoid, line, quadratic and cubic lie in the subspace). Specifically, in the right figure, we illustrate the meta-knowledge graph, where we set a threshold to filter the link with low similarity score and show the rest. We can see that V3 is the most popular vertice and connected with V1, V5 (represent curve) and V4 (represent line). V1 is further connected with V5, demonstrating the similarity of curve representation. In the few-shot classification problem, we first use the benchmark proposed in (b), where four fine-grained image classification datasets are included (Aircraft), and FGVCx-Fungi (Fungi)). For each few-shot classification task, it samples classes from one of four datasets. In this paper, we call this dataset as Plain-Multi and each fine-grained dataset as subdataset. Then, to demonstrate the effectiveness of our proposed model for handling more complex underlying structures, in this paper, we increase the difficulty of few-shot classification problem by introducing two image filters: blur filter and pencil filter. Similar as , for each image in PlainMulti, one artistic filters are applied to simulate a changing distribution of few-shot classification tasks. After applying the filters, the total number of subdatasets is 12 and each tasks is sampled from one of them. This data is named as Art-Multi. More detailed descriptions of the effect of different filters is discussed in Appendix C. Following the traditional meta-learning settings, all datasets are divided into meta-training, metavalidation and meta-testing classes. The traditional N-way K-shot settings are used to split training and test set for each task. We adopt the standard four-block convolutional layers as the base learner for ARML and all baselines for fair comparison. The number of vertices of meta-knowledge graph for Plain-Multi and Art-Multi datasets are set as 4 and 8, respectively. 
Additionally, for the miniImagenet and tieredImagenet , similar as, which tasks are constructed from a single domain and do not have heterogeneity, we compare our proposed ARML with baseline models and present the in Appendix D. Overall Performance Experimental for Plain-Multi and Art-Multi are shown in Table 1 and Table 2, respectively. For each dataset, the performance accuracy with 95% confidence interval is reported. Due to the space limitation, in Art-Multi dataset, we only show the average value of each filter here. The full are shown in Table 8 of Appendix E. In these two tables, first, we can observe that task-specific gradient-based models (MT-Net, MUMOMAML, HSML, BMAML) significantly outperforms globally shared models (MAML, Meta-SGD). Second, compared ARML with other task-specific gradient-based meta-learning methods, the better performance confirms that ARML can model and extract task-specific information more accurately by leveraging the constructed meta-knowledge graph. Especially, the performance gap between the ARML and HSML verifies the benefits of relational structure compared with hierarchical clustering structure. Third, as a gradientbased meta-learning algorithm, ARML can also outperform methods of other research lines (i.e., ProtoNet, TADAM, TapNet and VERSA). Finally, to show the effectiveness of proposed components in ARML, we conduct comprehensive ablation studies in Appendix F. The further demonstrate the effectiveness of prototype-based relational graph and meta-knowledge graph. In this section, we conduct extensive qualitative analysis for the constructed meta-knowledge graph, which is regarded as the key component in ARML. Due to the space limit, we present the on Art-Multi datasets here and the analysis of Plain-Multi with similar observations are discussed in Appendix G.1. We further analyze the effect To analyze the learned meta-knowledge graph, for each subdataset, we randomly select one task as exemplary (see Figure 9 of Appendix G.4 for more cases). For each task, in the left part of Figure 3, we show the similarity heatmap between prototypes and vertices in meta-knowledge graph, where deeper color means higher similarity. V0-V8 and P1-P5 denotes the different vertices and prototypes, respectively. The meta-knowledge graph is also illustrated in the right part. Similar as the graph in 2D regression, we set a threshold to filter links with low similarity and illustrate the rest of them. First, we can see that the V1 is mainly activated by bird and aircraft (including all filters), which may reflect the shape similarity between bird and aircraft. Second, V2, V3, V4 are firstly activated by texture and they form a loop in the meta-knowledge graph. Especially, V2 also benefits images with blur and pencil filters. Thus, V2 may represent the main texture and facilitate the training process on other subdatasets. The meta-knowledge graph also shows the importance of V2 since it is connected with almost all other vertices. Third, when we use blur filter, in most cases (bird blur, texture blur, fungi blur), V7 is activated. Thus, V7 may show the similarity of images with blur filter. In addition, the connection between V7 and V2 and V3 show that classify blur images may depend on the texture information. Fourth, V6 (activated by aircraft mostly) connects with V2 and V3, justifying the importance of texture information to classify the aircrafts. 
In this paper, to improve the effectiveness of meta-learning for handling heterogeneous task, we propose a new framework called ARML, which automatically extract relation across tasks and construct a meta-knowledge graph. When a new task comes in, it can quickly find the most relevant relations through the meta-knowledge graph and use this knowledge to facilitate its training process. The experiments demonstrate the effectiveness of our proposed algorithm. In the future, we plan to investigate the problem in the following directions: we are interested to investigate the more explainable semantic meaning in the meta-knowledge graph on this problem; Figure 3: Interpretation of meta-knowledge graph on Art-Multi dataset. For each subdataset, we randomly select one task from them. In the left, we show the similarity heatmap between prototypes (P0-P5) and meta-knowledge vertices (V0-V7). In the right part, we show the meta-knowledge graph. we plan to extend the ARML to the continual learning scenario where the structure of meta-knowledge graph will change over time; our proposed model focuses on tasks where the feature space, the label space are shared. We plan to explore the relational structure on tasks with different feature and label spaces. The work was supported in part by NSF awards #1652525 and #1618448. The views and contained in this paper are those of the authors and should not be interpreted as representing any funding agencies. Algorithm 2 Meta-Testing Process of ARML Require: Training data D tr t of a new task T t 1: Construct the prototype-based relational graph R t by computing prototype in equation 2 and weight in equation 4 2: Compute the similarity between each prototype and meta-knowledge vertice in equation 6 and construct the super-graph St 3: Apply GNN on super-graph St and get the updated prototype representationĈR t 4: Aggregate CR t in equation 8,ĈR t in equation 9 and get the representations qt, tt 5: Compute the task-specific initialization θ0t in equation 10 6: Update parameters In 2D regression problem, we set the inner-loop stepsize (i.e., α) and outer-loop stepsize (i.e., β) as 0.001 and 0.001, respectively. The embedding function E is set as one layer with 40 neurons. The autoencoder aggregator is constructed by the gated recurrent structures. We set the meta-batch size as 25 and the inner loop gradient steps as 5. In few-shot image classification, for both Plain-Multi and Art-Multi datasets, we set the corresponding inner stepsize (i.e., α) as 0.001 and the outer stepsize (i.e., β) as 0.01. For the embedding function E, we employ two convolutional layers with 3 × 3 filters. The channel size of these two convolutional layers are 32. After convolutional layers, we use two fully connected layers with 384 and 128 neurons for each layer. Similar as the hyperparameter settings in 2D regression, the autoencoder aggregator is constructed by the gated recurrent structures, i.e., AG t, AG t dec AG q, AG q dec are all GRUs. The meta-batch size is set as 4. For the inner loop, we use 5 gradient steps. For the gradient-based baselines (i.e., MAML, MetaSGD, MT-Net, BMAML. MUMOMAML, HSML), we use the same inner loop stepsize and outer loop stepsize rate as our ARML. As for non-parametric based meta-learning algorithms, both TADAM and Prototypical network, we use the same meta-training and meta-testing process as gradient-based models. Additionally, TADAM uses the same embedding function E as ARML for fair comparison (i.e., similar expressive ability). 
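For concreteness, the embedding function E used in the image experiments (two 3 × 3 convolutional layers with 32 channels followed by fully connected layers of 384 and 128 units) can be sketched as follows. The pooling, padding, activation and input-resolution choices are assumptions made only to keep the example runnable; the text does not specify them.

```python
import torch
import torch.nn as nn

class EmbeddingE(nn.Module):
    """Embedding function E for Plain-Multi / Art-Multi prototypes (illustrative)."""
    def __init__(self, in_channels=3, image_size=84):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # assumed down-sampling between convs
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        feat_dim = 32 * (image_size // 4) * (image_size // 4)
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 384), nn.ReLU(),
            nn.Linear(384, 128),                  # 128-d embedding used for prototypes
        )

    def forward(self, x):
        h = self.features(x)
        return self.fc(h.flatten(start_dim=1))
```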
In this dataset, we use pencil and blur filters to change the task distribution. To investigate the effect of the pencil and blur filters, we provide one example in Figure 4. We can observe that different filters result in different data distributions. All filters used are provided by OpenCV. For miniImagenet and tieredImagenet, since they do not have the characteristic of task heterogeneity, we show the results in Table 3 and Table 4, respectively. In these tables, we compare our model with other gradient-based meta-learning models (the top baselines are globally shared models and the bottom baselines are task-specific models). As in prior work, we also apply the standard 4-block convolutional layers for each baseline. For MT-Net on MiniImagenet, we use the results reported in (b), which controls for expressive power so that the compared models have similar capacity. Most task-specific models, including ARML, achieve similar performance on the standard benchmark due to the homogeneity between tasks.

Table 3: Performance comparison on the 5-way, 1-shot MiniImagenet dataset.
Algorithms      5-way 1-shot Accuracy
MAML            48.70 ± 1.84%
LLAMA           49.40 ± 1.83%
Reptile         49.97 ± 0.32%
MetaSGD         50.47 ± 1.87%
MT-Net          49.75 ± 1.83%
MUMOMAML        49.86 ± 1.85%
HSML (b)        50.38 ± 1.85%
PLATIPUS        50.13 ± 1.86%
ARML            50.42 ± 1.73%

Table 4: Performance comparison on the 5-way, 1-shot tieredImagenet dataset.
Algorithms      5-way 1-shot Accuracy
MAML            51.37 ± 1.80%
Reptile         49.41 ± 1.82%
MetaSGD         51.48 ± 1.79%
MT-Net          51.95 ± 1.83%
MUMOMAML        52.59 ± 1.80%
HSML (b)        52.67 ± 1.85%
ARML            52.91 ± 1.83%

We provide the full table of the Art-Multi dataset in Table 8. In this table, we can see that our proposed ARML outperforms almost all baselines in every subdataset. In this section, we perform the ablation study of the proposed ARML to demonstrate the effectiveness of each component. The results of the ablation study on the 5-way, 5-shot scenario for the Art-Multi and Plain-Multi datasets are presented in Table 5 and Table 6, respectively. Specifically, to show the effectiveness of the prototype-based relational graph, in ablation I, we apply mean pooling to aggregate the samples and feed the result to interact with the meta-knowledge graph. In ablation II, we use all samples to construct a sample-level relational graph without constructing prototypes. In ablation III, we remove the links between prototypes. Compared with ablations I, II and III, the better performance of ARML shows that structuring samples as prototypes better handles the underlying relations and alleviates the effect of potential anomalies. In ablation IV, we remove the meta-knowledge graph and use the prototype-based relational graph with the aggregator AG_q as the task representation. The better performance of ARML demonstrates the effectiveness of the meta-knowledge graph for capturing the relational structure and facilitating the classification performance. We further remove the reconstruction loss in ablation V and replace the encoder/decoder structure with an MLP in ablation VI. The results demonstrate that the autoencoder structure benefits the process of task representation learning, as do the selected encoder and decoder. In ablation VII, we share the gate value within each filter of the convolutional layers. Compared with VII, the better performance of ARML indicates the benefit of a customized gate for each parameter. In ablations VIII and IX, we change the modulating function to FiLM and tanh, respectively. We can see that ARML is not very sensitive to the modulating activation, and the sigmoid function is slightly better in most cases. Figure 5: Interpretation of meta-knowledge graph on Plain-Multi dataset. For each subdataset, one task is randomly selected from them.
In the left figure, we show the similarity heatmap between prototypes (P1-P5) and meta-knowledge vertices (denoted as E1-E4), where deeper color means higher similarity. In the right part, we show the meta-knowledge graph, where a threshold is also set to filter low similarity links. We first investigate the impact of vertice numbers in meta-knowledge graph. The of Art-Multi (5-way, 5-shot) are shown in Table 7. From the , we can notice that the performance saturates as the number of vertices around 8. One potential reason is that 8 vertices are enough to capture the potential relations. If we have a larger datasets with more complex relations, more vertices may be needed. In addition, if the meta-knowledge graph do not have enough vertices, the worse performance suggests that the graph may not capture enough relations across tasks. In this part, we provide the case study to visualize the task structure of HSML and ARML. HSML is one of representative task-specific meta-learning methods, which adapts transferable knowledge by introducing a task-specific representation. It proposes a tree structure to learn the relations between tasks. However, the structure requires massive labor efforts to explore the optimal structure. By contrast, ARML automatically learn the relation across tasks by introducing the knowledge graph. In addition, ARML fully exploring there types of relations simultaneously, i.e., the prototype-prototype, prototype-knowledge and knowledge-knowledge relations. To compare these two models, we show the case studies of HSML and ARML in Figure 6 and Figure 7. For tasks sampled from bird, bird blur, aircraft and aircraft blur are selected for this comparison. Following case study settings in the original paper (b), for each task, we show the soft-assignment probability to each cluster and the learned hierarchical structure. For ARML, like 3, we show the learned meta-knowledge and the similarity heatmap between prototypes and meta-knowledge vertices. In this figures we can observe that ARML constructs relations in a more flexible way by introducing the graph structure. More specifically, while HSML activate relevant node in a fixed two-layer hierarchical way, ARML provides more possibilities to leverage previous learned tasks by leveraging prototypes and the learned meta-knowledge graph. Published as a conference paper at ICLR 2020 We provide additional case study in this section. In Figure 8, we show the cases of 2D regression and the additional cases of Art-Multi are illustrated in Figure 9. We can see the additional cases also support our observations and interpretations. | Addressing task heterogeneity problem in meta-learning by introducing meta-knowledge graph | 635 | scitldr |
In this paper, a deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities, e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. Our experimental have demonstrated that our deep boosting algorithm can significantly improve the accuracy rates on large-scale visual recognition. The rapid growth of computational powers of GPUs has provided good opportunities for us to develop scalable learning algorithms to leverage massive digital images to train more discriminative classifiers for large-scale visual recognition applications, and deep learning BID19 BID20 BID3 has demonstrated its outstanding performance because highly invariant and discriminant features and multi-way softmax classifier are learned jointly in an end-to-end fashion. Before deep learning becomes so popular, boosting has achieved good success on visual recognition BID21. By embedding multiple weak learners to construct an ensemble one, boosting BID15 can significantly improve the performance by sequentially training multiple weak learners with respect to a weighted error function which assigns larger weights to the samples misclassified by the previous weak learners. Thus it is very attractive to invest whether boosting can be integrated with deep learning to achieve higher accuracy rates on large-scale visual recognition. By using neural networks to replace the traditional weak learners in the boosting frameworks, boosting of neural networks has received enough attentions BID23 BID10 BID7 BID9. All these existing deep boosting algorithms simply use the weighted error function (proposed by Adaboost ) to replace the softmax error function (used in deep learning) that treats all the errors equally. Because different object classes may have different learning complexities, it is more attractive to invest new deep boosting algorithm that can use different weights over various object classes rather than over different training samples. Motivated by this observation, a deep boosting algorithm is developed to generate more discriminative ensemble classifier by combining a set of base deep CNNs with diverse capabilities, e.g., all these base deep CNNs (base experts) are sequentially trained to recognize different subsets of object classes in an easy-to-hard way according to their learning complexities. The rest of the paper is organized as: Section 2 briefly reviews the related work; Section 3 introduce our deep boosting algorithm; Section 4 reports our experimental ; and we conclude this paper at Section 5. In this section, we briefly review the most relevant researches on deep learning and boosting. Even deep learning has demonstrated its outstanding abilities on large-scale visual recognition BID6 BID19 BID20 BID3 BID4, it still has room to improve: all the object classes are arbitrarily assumed to share similar learning complexities and a multi-way softmax is used to treat them equally. For recognizing large numbers of object classes, there may have significant differences on their learning complexi-ties, e.g., some object classes may be harder to be recognized than others. 
Thus learning their deep CNNs jointly may not be able to achieve the global optimum effectively because the gradients of their joint objective function are not uniform for all the object classes and such joint learning process may distract on discerning some object classes that are hard to be discriminated. For recognizing large numbers of object classes with diverse learning complexities, it is very important to organize them in an easy-to-hard way according to their learning complexities and learn their deep CNNs sequentially. By assigning different weights to the training samples adaptively, boosting BID15 BID1 BID16 has provided an easy-to-hard approach to train a set of weak learners sequentially. Thus it is very attractive to invest whether we can leverage boosting to learn a set of base deep CNNs sequentially for recognizing large numbers of object classes in an easy-to-hard way. Some deep boosting algorithms have been developed by seamlessly integrating boosting with deep neural networks to improve the performance in practice. BID17 BID18 proposed the first work to integrate Adaboost with neural networks for online character recognition application. BID23 extended the Adaboosting neural networks algorithm for credit scoring. Recently, BID11 developed a deep incremental boosting method which increases the size of neural network at each round by adding new layers at the end of the network. Moreover, BID10 integrated residual networks with incremental boosting and built an ensemble of residual networks via adding one more residual block to the previous residual network at each round of boosting. All these methods combine the merits of boosting and neural networks; they train each base network either using a different training set by resampling with a probability distribution derived from the error weight, or directly using the weighted cost function for the base network. Alternatively, BID14 proposed a margin enforcing loss for multi-class boosting and presented two ways to minimize the ing risk: the one is coordinate descent approach which updates one predictor component at a time, the other way is based on directional functional derivative and updates all components jointly. By applying the first way, i.e., coordinate descent, BID0 designed ensemble learning algorithm for binary-class classification using deep decision trees as base classifiers and gave the data-dependent learning bound of convex ensembles, and BID7 furthermore extended it to multi-class version. By applying the second way, i.e., directional derivative descent, BID9 developed an algorithm for boosting deep convolutional neural networks (CNNs) based on least squares between weights and directional derivatives, which differs from the original method based on inner product of weights and directional derivative in BID14. All above algorithms focus on seeking the optimal ensemble predictor via changing the error weights of samples; they either update one component of the predictor per boosting iteration, or update all components simultaneously. On the other hand, our deep boosting algorithm focuses on combining a set of base deep CNNs with diverse capabilities: large numbers of object classes are automatically organized in an easyto-hard way according to their learning complexities; all these base deep CNNs (base experts) are sequentially learned to recognize different subsets of object classes; and these base deep CNNs with diverse capabilities are seamlessly combined to generate more discriminative ensemble classifier. 
In this paper, a deep boosting algorithm is developed by seamlessly combining a set of base deep CNNs with various capabilities, e.g., all these base deep CNNs are sequentially trained to recognize different subsets of object classes in an easy-to-hard way according to their learning complexities. Our deep boosting algorithm uses the base deep CNNs as its weak learners, and many well-designed deep networks (such as AlexNet BID6, VGG BID19, ResNet BID3, and huang2016densely), can be used as its base deep CNNs. It is worth noting that all these well-designed deep networks [] optimize their structures (i.e., numbers of layers and units in each layer), their node weights and their softmax jointly in an end-to-end manner for recognizing the same set of object classes. Thus our deep boosting algorithm is firstly implemented for recognizing 1,00 object classes, however, it is straightward to extend our current implementation Normalization: DISPLAYFORM0 Training the t th base deep CNNs f t (x) via Loss t with respect to the importance distribution DISPLAYFORM1 Calculating the error per category for f t (x): ε t (l), (l = 1, ..., C); DISPLAYFORM2 Computing the weighted error for f t (x): DISPLAYFORM3 Setting DISPLAYFORM4.., C), so that hard object classes misclassified by f t (x) can receive larger weights (importances) when training the (t + 1) th base deep CNNs at the next round; 8: end for 9: Ensembling: DISPLAYFORM5 when huge deep networks (with larger capacities) are available in the future and being used as the base deep CNNs. As illustrated in Algorithm 1, our deep boosting algorithm contains the following key components: (a) Training the t th base deep CNNs (base expert) f t (x) by focusing on achieving higher accuracy rates for some particular object classes; (b) Estimating the weighted error function for the t th base deep CNNs f t (x) according to the distribution of importances D t for C object classes; (c) Updating the distribution of importances D t+1 for C object classes to train the (t + 1) th base deep CNNs by spending more efforts on distinguishing the hard object classes which are not classified very well by the previous base deep CNNs; (d) Such iterative training process stops when the maximum number of iterations is reached or a certain level of the accuracy rates is achieved. For the t th base expert f t (x), we firstly employ deep CNNs to map x into more separable feature space h t (x; θ t), followed by a fully connected discriminant layer and a C-way softmax layer. The output of the t th base expert is the predicted multi-class distribution, denoted as DISPLAYFORM0 ⊤, whose each component p t (l|x) is the probability score of x assigned to the object class l, (l = 1, ..., C): DISPLAYFORM1 where θ t and w lt, (l = 1, ..., C) are the model parameters for the t th base expert f t (x). Based on the above probability score, the category label of x can be predicted by the t th base expert as follows: DISPLAYFORM2 Suppose that training set consists of N labeled samples from C classes: DISPLAYFORM3. To train the t th base expert f t (x), the model parameters can be learned by maximizing the objective function in the form of weighted margin as follows: DISPLAYFORM4 where DISPLAYFORM5 Herein the indicator function 1(y i = l) is equal to 1 if y i = l; otherwise zero. N l denotes the number of samples belonging to the l th object class. D t (l) is the normalized importance score for class l in the t th base expert f t (x). 
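Reading the objective above as a class-importance-weighted likelihood (i.e., ignoring the secondary margin term), the loss used to train the t-th base expert amounts to re-weighting the per-sample cross entropy by the importance of the sample's class. The helper below is an illustrative PyTorch version; `importance` stands for the normalized distribution D_t over the C classes.

```python
import torch
import torch.nn.functional as F

def weighted_expert_loss(logits, targets, importance):
    """Class-importance-weighted cross entropy for the t-th base expert.

    logits:     (B, C) pre-softmax scores from the expert
    targets:    (B,) ground-truth class indices
    importance: (C,) normalized class importance distribution D_t
    """
    log_probs = F.log_softmax(logits, dim=1)
    nll = -log_probs[torch.arange(len(targets)), targets]   # per-sample negative log-likelihood
    weights = importance[targets]                            # D_t(y_i) for each sample
    return (weights * nll).sum() / weights.sum()             # weighted (approximate) objective
```

Note that torch.nn.functional.cross_entropy exposes a `weight` argument that implements essentially the same per-class re-weighting.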
By using the distribution of importances [D t,..., D t (C)] to approximate the learning complexities for C object classes, our deep boosting algorithm can push the current base deep CNNs to focus on distinguishing the object classes which are hard classified by the previous base deep CNNs, thus it can support an easy-to-hard solution for large-scale visual recognition.ξ lt measures the margin between the average confidence on correctly classified examples and the average confidence on misclassified examples for the l th object class. If the second item in Eq. FORMULA11 is small enough and negligible, DISPLAYFORM6, then maximizing the objective function in Eq. FORMULA10 is equivalent to maximizing the weighted likelihood. For the t th base expert f t (x), the classification error rate over the training samples in l th object class is as follows: DISPLAYFORM7 This error rate is used to update category weight and the loss function of the next weak learner, and above definition encourages predictors with large margin to improve the discrimination between correct class and incorrect classe competing with it. Error rate calculated by Eq. FORMULA15 is in soft decision with probability; alternatively, we can also simply compute the error rate in hard decision as DISPLAYFORM8 where the hyperparameter λ controls the threshold, and we constrain λ > 1 2 (i.e., 1 2λ < 1) such that the threshold makes sense. The larger the hyper-parameter λ is, the more strict the precision requirement becomes. We then compute the weighted error rate ε t over all classes for f t (x) such that hard object classes are focused on by the next base expert. DISPLAYFORM9 The distribution of importances is initialized equally for all C object classes: DISPLAYFORM10, and it is updated along the iterative learning process by emphasizing the object classes which are heavily misclassified by the previous base deep CNNs: DISPLAYFORM11 where β t should be an increasing function of ε t, and its range should be 0 < β t < 1. It should be pointed out that λϵ t (l) denotes the product of λ and ϵ t (l). Such update of distribution encourages the next base network focusing on the categories that are hard to classify. As shown in Section 4, to guarantee the upper boundary of ratio (the number of heavily misclassified categories over the number of all classes) to be minimized, we set DISPLAYFORM12 Normalization of the updated importances distribution can be easily carried out: DISPLAYFORM13 The distribution of importances is used to: (a) separate the hard object classes (heavily misclassified by the previous base deep CNNs) from the easy object classes (which have been classified correctly by the previous base deep CNNs); (b) estimate the weighted error function for the (t + 1) th base deep CNNs f t+1 (x), so that it can spend more efforts on distinguishing the hard object classes misclassified by the previous base deep CNNs. After T iterations, we can obtain T base deep CNNs (base experts) {f 1, · · ·, f t, · · ·, f T}, which are sequentially trained to recognize different subsets of C object classes in an easy-to-hard way according to their learning complexities. All these T base deep CNNs are seamlessly combined to generate more discriminative ensemble classifier g(x) for recognizing C object classes: DISPLAYFORM14 where DISPLAYFORM15 is a normalization factor. 
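Putting the steps above together, the per-class errors, the weighted error ε_t, the choice β_t = λε_t/(1 − λε_t), the importance update and the ensemble weights log(1/β_t) can be outlined as below. This is an illustrative NumPy skeleton following the AdaBoost-style rule suggested by the text: `train_expert` and `predict_proba` are hypothetical helpers standing in for training a base deep CNN under the weighted loss and evaluating its softmax, and the soft per-class error is simplified to one minus the mean confidence on the correct class.

```python
import numpy as np

def deep_boosting(x, y, num_classes, T, lam, train_expert, predict_proba):
    """Skeleton of the boosting loop over T base experts, in soft-decision form."""
    D = np.full(num_classes, 1.0 / num_classes)      # initial importance distribution D_1
    experts, alphas = [], []

    for t in range(T):
        expert = train_expert(x, y, D)                # minimize the D-weighted loss
        probs = predict_proba(expert, x)              # (N, C) softmax outputs

        # Simplified soft per-class error: one minus mean confidence on the correct class.
        eps = np.array([1.0 - probs[y == l, l].mean() for l in range(num_classes)])
        eps_t = float((D * eps).sum())                # weighted error over classes

        beta_t = (lam * eps_t) / (1.0 - lam * eps_t)  # requires lam * eps_t < 1/2
        D = D * beta_t ** (1.0 - lam * eps)           # keep weight on still-hard classes
        D = D / D.sum()

        experts.append(expert)
        alphas.append(np.log(1.0 / beta_t))           # ensemble weight of expert t

    return experts, np.array(alphas)

def ensemble_predict(experts, alphas, x, predict_proba):
    probs = sum(a * predict_proba(e, x) for a, e in zip(alphas, experts))
    return probs.argmax(axis=1)                       # weighted vote over base experts
```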
By diversifying a set of base deep CNNs on their capabilities (i.e., they are trained to recognize different subsets of C object classes in an easyto-hard way), our deep boosting algorithm can obtain more discriminative ensemble classifier g(x) to significantly improve the accuracy rates on large-scale visual recognition. To apply such ensembled classifier for recognition, for a given test sample x test, it firstly goes through all these base deep CNNs to obtain T deep representations {h 1, · · ·, h T} and then its final probability score p(l|x test) to be assigned into the lth object class is calculated as follows: DISPLAYFORM16 3.2 SELECTION OF β tIn our deep boosting algorithm, β t is selected to be an increasing function of error rate ε t, with its range. β t is employed in two folds: (i) As seen in Eq., β t helps to update the importance of different categories such that hard object classes are emphasized; (ii) As seen in Eq. FORMULA7 and Eq., reciprocals of β t are the combination coefficients for the final ensemble classifier such that those base experts with low error rate have large weight. The criterion of hard object classes for the tth expert is DISPLAYFORM17 for each t, (t = 1, ..., T); it implies that the lth object class is hard for all T experts. Let ♯{l : ϵ min (l) > 1 2λ } denote the the number of hard object classes for all T experts. Inspired by BID1, we now show that the selection of β t as in Eq. guarantees the upper boundary of ratio (the number of heavily misclassified categories over the number of all classes) to be minimized. It can be shown that for 0 < x < 1 and 0 < α < 1, we have x α ≤ 1 − (1 − x)α. According to Eq.: DISPLAYFORM18 According to Eq. and Eq., we get: DISPLAYFORM19 By substituting Eq. into Eq. FORMULA7, we get DISPLAYFORM20 DISPLAYFORM21 Combining Eq. FORMULA7 with Eq., we get DISPLAYFORM22 To minimize the rightside, we set its partial derivative with respect to β t to zero: DISPLAYFORM23 Since β t only exists in the tth factor, above equation is equivalent to DISPLAYFORM24 Solving it, we find that β t can be optimally selected as: DISPLAYFORM25 We substitute β t = λεt 1−λεt into Eq., and get the upper boundary of ratio (the number of hard object categories over the number of all classes): DISPLAYFORM0 Now we discuss the range for the hyper-parameter λ. Recall that the criterion of hard object classes for the tth expert is ϵ t (l) > From the relation between λε t and λε t (1 − λε t), as illustrated in Fig.1, we can see the effect of λ on the upper boundary of ratio (the number of hard object categories over the number of all classes) in Eq..• In the yellow shaded region, λ ∈ [1 2, 1 2εt], i.e., εt 2 < λε t < 1 2, the condition 0 < β t < 1 is satisfied, and the upper boundary of hard category percentage in Eq. increases with λ increasing, the reason for which is that when λ increases, the precision requirement increases, thus the number of hard categories increases too.• On the right side of the yellow shaded region, λ > 1 2εt, i.e., λε t > 1 2. In this case, the condition 0 < β t = λεt 1−λεt < 1 is not satisfied, thus the update of importance distribution in Eq. can not effectively emphasize the object classes which are heavily misclassified by the previous experts. In hard classification task, large error rates ε t tend to in λε t larger than or approaching 1 2, and β t larger than or approaching 1. 
The value of λ should be set smaller to alleviate large ε t such that λε t < 1 2 and 0 < β t < 1.• On the left side of the yellow shaded region, λ < 1 2, i.e., The procedure of learning the t th base expert repeatedly adjusts the parameters of the corresponding deep network so as to maximize the objective function O t in Eq.. To maximize O t, it is necessary to calculate its gradients with respect to all parameters, including the weights {w lt} C l=1 and the set of model parameters θ t.For clearance, we denote DISPLAYFORM0 Thus, the probability score of x assigned to the object class l, (l = 1, ..., C), in Eq. can be written as DISPLAYFORM1 Then, the objective function in Eq. can be denoted as DISPLAYFORM2 From above presentations, it can be more clearly seen that the objective is a composite function. DISPLAYFORM3 Herein, J is Jacobi matrix. Such gradients are back-propagated [] through the t th base deep CNNs to fine-tune the weights {w lt} C l=1 and the set of model parameters θ t simultaneously. Denote X as the instance space, denote Ω as the distribution over X, and denote S as a training set of N examples chosen i.i.d according to Ω. We are to investigate the gap between the generalization error on Ω and the empirical error on S.Suppose that F is the set from which the base deep experts are chosen, and let G = Note that g is a C-dim vetor, and each component of g is the category confidence, i.e., g y (x) = p(y|x), (y = 1, ..., C). Based on Eq., the category label of test sample can be predicted by arg max y g y (x) = p(y|x). The ensembled classifier g predicts wrong if g y (x) ≤ maxȳ ̸ =y gȳ(x). The generalization error rate for the final ensembled classifier can be measured by the probability DISPLAYFORM0 DISPLAYFORM1 According to probability theory, for any events B 1 and B 2, P(B 1) ≤ P(B 2) + P(B 2 |B 1), therefore DISPLAYFORM2 where ξ > 0 measures the margin between the confidences from ground-truth and incorrect categories. Using Chernoff bound BID16, the the second term in the right side of Eq. FORMULA7 is bounded as: DISPLAYFORM3 Assume that the base-classifier space F is with VC-dimension d, which can be approximately estimated by the number of neurons ν and the number of weigths ω in the base deep network, i.e., d = O(νω). Recall that S is a sample set of N examples from C categories. Then the effective number of hypotheses for F over S is at most DISPLAYFORM4 Thus, the effective number of hypotheses over S forĜ = DISPLAYFORM5 Applying Devroye Lemma as in BID16, it holds with probability at least 1 − δ Γ that DISPLAYFORM6 where DISPLAYFORM7 Likewise, in probability theory for any events B 1 and B 2, P(B 1) ≤ P(B 2) + P(B 1 |B 2), thus DISPLAYFORM8 Because DISPLAYFORM9 So, combining Eq.(19−23) together, it can be derived that DISPLAYFORM10 As can be seen from above, large margin ξ over the training set corresponds to narrow gap between the generalization error on Ω and the empirical error on S, which leads to the better upper bound of generalization error. In this section we evaluate the proposed algorithms on three real world datasets MNIST , CIFAR-100 BID5, and ImageNet . For MNIST and CIFAR-100, we train all networks from scrach in each AdaBoost iteration stage. On ImageNet, we use the pretrained model as the of iteration #1 and then train weighted models sequentially. The pretrained model is available in TorchVision 1. In each iteration, we adopt the weight initialization menthod proposed by BID2. 
All the networks are trained using stochastic gradient descent (SGD) with a weight decay of 10^-4 and a momentum of 0.9 in the experiments. The MNIST dataset consists of 60,000 training and 10,000 test handwritten digit samples. BID18 showed the accuracy improvement of an MLP via AdaBoost on MNIST by updating sample weights according to classification errors. For a fair comparison, we first use a similar network architecture (MLP) as the base experts in the experiments. We train two sets of networks whose only difference is that one updates weights w.r.t. the class errors on the training set while the other updates weights w.r.t. the sample errors on the training set. The former is the method proposed in this paper, and the latter is the traditional AdaBoost method. In the two sets of weak learners, we share the same weak learner in iteration #1 and train the other two weak learners separately. For data pre-processing, we normalize the data by subtracting the means and dividing by the standard deviations. In the experiment on MNIST, we simply train the network with a learning rate of 0.01 throughout the whole 120 epochs. With our proposed method, the top-1 error on the test set decreases from 4.73% to 1.87% after three iterations (Table 1). After iteration #1, the top-1 error of our method drops more quickly than that of the method which updates weights w.r.t. sample errors. Our method, which updates weights w.r.t. the class errors, leverages the idea that different classes have different learning complexities and should not be treated equally. Through the iterations, our method trains a set of classifiers in an easy-to-hard way. Class APs vary from one weak learner to the next across iterations, increasing for the more heavily weighted classes while decreasing for the less weighted classes (FIG1). Therefore, in each iteration, the weighted classifier behaves like an expert different from the classifier in the previous iteration. Though the APs of certain classes may decrease to some degree with each weak learner, the boosting models improve the accuracy for hard classes while preserving the accuracy for easy classes (FIG1). Our method coordinates the set of weak learners, trained sequentially with diversified capabilities, to improve the classification capability of the boosting model. We also carry out experiments on the CIFAR-100 dataset. CIFAR-100 consists of 60,000 images from 100 classes, with 500 training images and 100 testing images per class. We adopt padding, mirroring and shifting for data augmentation, and normalization, as in BID3 BID4. In the training stage, we hold out 5,000 training images for validation and leave 45,000 for training. Because the error per class on the training set approaches zero, and training errors can be all zeros even with simple networks, we update the category distribution w.r.t. the class errors on the validation set. We do not use any sample of the validation set to update the parameters of the network itself. When training networks on CIFAR-100, the initial learning rate is set to 0.1 and is multiplied by 0.1 at the scheduled epochs. Similar to BID2 BID4, we train the network for 300 epochs. We show the results with various models, including ResNet56 (λ = 0.7) and DenseNet-BC (k=12) BID4, on the test set. The performance of the ensembled classifier with different numbers of base networks is shown in the middle two rows of Table 2. As illustrated in Section 3.3, λ controls the weight differences among classes. In comparison, we use λ = {0.7, 0.5, 0.1}.
As shown in FIG2 -left, with smaller lambda, the weitht differences become bigger. We use ResNet model in BID3 with 56 layers on CIFAR-100 datasets. Overall, the models with lambda=0.7 performs the best, ing in 24.15% test error after four iterations. Comparing with lambda=0.5 and lambda=0.7, we find that both model performs well in the initial several iterations, but the model with lambda=0.7 would converge to a better optimal(FIG2 . However, with lambda=0.1 which weights classes more discriminately, top 1 error fluctuates along the iterations ( FIG2 . We conclude that lambda should be merely used to insure that the value of β is below 0.5 and may harm the performance in the ensemble models if set to a low value. In FIG3, we show the comparison of weak leaner #1 and weak learner #2 without boosting. Though with minor exeptions, most classes with low APs improve their class APs in the proceeding weak learner. Our method is based on the motivation that different classes should have different learning comlexity. Thus, those classes with higher learning complexity should be paid more attention along the iterations. Based on the class AP of the privious iteration, we suppose those classes with lower APs should have higher learning complexity and be paid for attention in the subsequent iterations. We furthermore carry out experiments on ILSVRC2012 Classification dataset which consists of 1.2 million images for training, and 50,000 for validation. There are 1,000 classes in the dataset. For data augmentation and normalization, we adopt scaling, ramdom cropping and horizontal flipping as in BID3 BID4. Similar to the experiments on CIFAR-100, the error per class on training datasets approaches zero, we update the category distribution w.r.t the class errors on validation datasets. Since the test dataset of ImageNet are not available, we just report the on the validation sets, following BID3; BID4 for ImageNet. When we train ResNet50 networks on ImageNet, the initial learning rates are set to 0.1 and divided by 0.1 at epoch. Similar to BID2 BID4 again, we train the network for 90 epoches. The performances of emsembled classifier with different number of base networks are shown in the bottom rows of (table 2). These base ResNet networks with diverse capabilities are combined to generate more discriminative ensemble classifier. In this paper, we develop a deep boosting algorithm is to learn more discriminative ensemble classifier by combining a set of base experts with diverse capabilities. The base experts are from the family of deep CNNs and they are sequentially trained to recognize a set of object classes in an easy-to-hard way according to their learning complexities. As for the future network, we would like to investigate the performance of heterogeneous base deep networks from different families. | A deep boosting algorithm is developed to learn more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs. | 636 | scitldr |
We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans. Humans have always created music and replicated it -whether it is by singing, whistling, clapping, or, after some training, playing improvised or standard musical instruments. This ability is not unique to us, and there are many other vocal mimicking species that are able to repeat music from hearing. Music is also one of the first domains to be digitized and processed by modern computers and algorithms. It is, therefore, somewhat surprising that in the core music task of mimicry, AI is still much inferior to biological systems. In this work, we present a novel way to produce convincing musical translation between instruments and styles. For example 1, we convert the audio of a Mozart symphony performed by an orchestra to an audio in the style of a pianist playing Beethoven. Our ability builds upon two technologies that have recently become available: (i) the ability to synthesize high quality audio using autoregressive models, and (ii) the recent advent of methods that transform between domains in an unsupervised way. The first technology allows us to generate high quality and realistic audio and thanks to the teacher forcing technique, autoregressive models are efficiently trained as decoders. The second family of technologies contributes to the practicality of the solution, since posing the learning problem in the supervised setting, would require a parallel dataset of different musical instruments. In our architecture, we employ a single, universal, encoder and apply it to all inputs (universal here means that a single encoder can address all input music, allowing us to achieve capabilities that are known as universal translation). In addition to the advantage of training fewer networks, this also enables us to convert from musical domains that were not heard during training to any of the domains encountered. The key to being able to train a single encoder architecture, is making sure that the domain-specific information is not encoded. We do this using a domain confusion network that provides an adversarial signal to the encoder. In addition, it is important for the encoder not to memorize the input signal but to encode it in a semantic way. We achieve this by distorting the input audio by random local pitch modulation. During training, the network is trained as a denoising autoencoder, which recovers the undistorted version of the original input. Since the distorted input is no longer in the musical domain of the output, the network learns to project out-of-domain inputs to the desired output domain. In addition, the network no longer benefits from memorizing the input signal and employs a higher-level encoding. Asked to convert one musical instrument to another, our network shows a level of performance that seems to approach that of musicians. 
When controlling for audio quality, which is still lower for generated music, it is many times hard to tell which is the original audio file and which is the output of the conversion that mimics a completely different instrument. The network is also able to successfully process unseen musical instruments such as drums, or other sources, such as whistles. Domain Transfer Recently, there has been a considerable amount of work, mostly on images and text, which performs unsupervised translation between domains A and B, without being shown any matching pairs, i.e., in a completely unsupervised way. Almost all of this work employs GAN constraints BID8, in order to ensure a high level of indistinguishability between the translations of samples in A and samples from the domain B. In our work, the output is generated by an autoregressive model and training takes place using the ground truth output of the previous time steps ("teacher forcing"), instead of the predicted ones. A complete autoregressive inference is only done during test time, and it is not practical to apply such inference during training in order to get a realistic generated ("fake") sample for the purpose of training the GAN.Another popular constraint is that of circularity, namely that by mapping from A to B and back to A a reconstruction of the original sample is obtained BID39 BID38. In our work, for the same reason mentioned above, the output during training does not represent the future test time output, and such a constraint is unrealistic. An application of circularity in audio was present in BID13, where a non-autoregressive model between vocoder features is used to convert between voices in an unsupervised way. Cross domain translation is not restricted to a single pair of domains. The recent StarGAN BID1 method creates multiple cycles for mapping between multiple (more than two) domains. The method employs a single generator that receives as input the source image as well as the specification of the target domain. It then produces the analog "fake" image from the target domain. Our work employs multiple decoders, one per domain, and attempts to condition a single decoder on the selection of the output domain failed to produce convincing . UNIT BID20 employs an encoder-decoder pair per each domain, where the latent spaces of the domains are assumed to be shared. This is achieved by sharing the network layers that are distant from the image (the top layers of the encoder and the bottom layers of the decoder), similarly to CoGAN BID19. Cycle-consistency is also added, and structure is added to the latent space using a variational autoencoder BID17 loss terms. Our method employs a single encoder, which eliminates the need for many of the associated constraints. In addition, we do not impose a VAE loss term BID17 on the latent space of the encodings and instead employ a domain confusion loss BID6. The work of BID21 investigates the problem of learning invariant representations by employing the Maximum Mean Discrepancy (MMD), which we do not use. Audio Synthesis WaveNet (van den BID36) is an autoregressive model that predicts the probability distribution of the next sample, given the previous samples and an input conditioning signal. Its generated output is currently considered of the highest naturalness, and is applied in a range of tasks. In BID26, the authors have used it for denoising waveforms by predicting the middle ground-truth sample from its noisy input support. 
Recent contributions in Text-To-Speech(TTS) BID25 BID30 have successfully conditioned wavenet on linguistic and acoustic features to obtain state of the art performance. In VQ-VAE (van den BID35, voice conversion was obtained by employing a variational autoencoder that produces a quantized latent space that is conditioned on the speaker identity. Similar to our work, the decoder is based on WaveNet. However, we impose a greater constraint on the latent space by (a) having a universal encoder, forcing the embeddings of all domains to lie in the same space, yet (b) training a separate reconstructing decoder for each domain, provided that (c) the latent space is domain independent, thereby reducing source-target pathways memorization, which is also accomplished by (d) employing augmentation to distort the input signal. Invariance is achieved in VQ-VAE through the strong bottleneck effect achieved by discretization. Despite some effort, we were not able to use a discrete latent space here. Recently, BID4 explored discretization as a method to capture long-range dependencies in unconditioned music generation, for up to 24 seconds. We focus on translation, and the conditioning on the source signal carries some long-range information on the development of the music. Consider an analogy to a myopic language translation system, where the input is a story in English and the output is a story in Spanish. Even if the translation occurs one sentence at a time, the main theme of the story is carried by the "conditioning" on the source text. The architecture of the autoencoder we employ is the wavenet-autoencoder presented in BID5. In comparison to this work, our inputs are not controlled and are collected from consumer media. Our overall architecture differs in that multiple decoders and an additional auxiliary network, which is used for disentangling the domain information from the other aspects of the music representation, are trained and by the addition of an important augmentation step. In the supervised learning domain, an audio style transfer between source and target spectrograms was performed with sequence-to-sequence recurrent networks BID10. This method requires matching pairs of samples played on different instruments. In another fully supervised work BID9 ), a graphical model aimed at modeling polyphonic tones of Bach was trained on notes, capturing the specificity of Bach's chorales. This model is based on RNNs and requires a large corpus of notes of a particular instrument produced with a music editor. Style Transfer Style transfer is often confused with domain translation and the distinction is not always clear. In the task of style transfer, the "content" remains the same between the input and the output, but the "style" is modified. Notable contributions in the field include BID7 BID34 BID12, which synthesize a new image that minimizes the content loss with respect to the content-donor sample and the style loss with respect to one or more samples of a certain style. The content loss is based on comparing the activations of a network training for an image categorization task. The style loss compares the statistics of the activations in various layers of the categorization layer. 
An attempt at audio style transfer is described in BID0.Concatenative Synthesis In the computer music and audio effects literature, the conversion task we aim to solve is tackled by concatenating together short pieces of audio from the target domain, such that the output audio resembles the input audio from the source domain BID37; BID28; Zils & Pachet FORMULA0; BID31. The method has been extensively researched, see the previous work section of BID24 and the online resource of BID29. A direct comparison to such methods is challenging, since many of the methods have elaborate interfaces with many tunable parameters that vary from one conversion task to the next. To the extent possible, we compare with some of the published in Sec. 4.2, obtaining what we believe to be clearly superior . Our domain translation method is based on training multiple autoencoder pathways, one per musical domain, such that the encoders are shared. During training, a softmax-based reconstruction loss is applied to each domain separately. The input data is randomly augmented, prior to applying the encoder, in order to force the network to extract high-level semantic features, instead of simply memorizing the data. In addition, a domain confusion loss BID6 ) is applied to the latent space to ensure that the encoding is not domain-specific. A diagram of the translation architecture is shown in FIG0. We reuse an existing autoencoder architecture that is based on a WaveNet decoder and a WaveNetlike dilated convolution encoder BID5. The WaveNet of each decoder is conditioned on the latent representation produced by the encoder. In order to reduce the inferencetime, the nv-wavenet CUDA kernels provided by NVIDIA (https://github.com/NVIDIA/ nv-wavenet) were used after modification to better match the architecture suggested by van den Oord et al. FORMULA0, as described below. The encoder is a fully convolutional network that can be applied to any sequence length. The network has three blocks of ten residual-layers, a total of thirty layers. Each residual-layer contains a RELU nonlinearity, a non-causal dilated convolution with an increasing kernel size, a second RELU, and a 1 × 1 convolution followed by the residual summation of the activations before the first RELU. There is a fixed width of 128 channels. After the three blocks, there is an additional 1 × 1 layer. An average pooling with a kernel size of 50 milliseconds (800 samples) follows in order to obtain an encoding in R 64, which implies a temporal down sampling by a factor of ×12.5.The encoding is upsampled temporally to the original audio rate, using nearest neighbor interpolation and is used to condition a WaveNet decoder. The conditioning signal is passed through a 1 × 1 layer that is different for each WaveNet layer. The audio (both input and output) is quantized using 8-bit mu-law encoding, similarly to both BID36 BID5, which in some inherent loss of quality. The WaveNet decoder has either four blocks of 10 residual-layers and a ing receptive field of 250 milliseconds (4,093 samples), as in BID5, or 14 layer blocks and a much larger receptive field of 4 seconds. Each residual-layer contains a causal dilated convolution with an increasing kernel size, a gated hyperbolic tangent activation, a 1 × 1 convolution followed by the residual summation of the layer input, and a 1 × 1 convolution layer which introduces a skip connection. Each residual-layer is conditioned on the encoding described above. 
The summed skip connections are passed through two fully connected layers and a softmax activation to output the next timestep probability. A detailed diagram of the WaveNet autoencoder is shown in FIG0. We modify the fast nv-wavenet CUDA inference kernels, which implement the architecture suggested by BID25, and create efficient WaveNet kernels that implement the WaveNet architecture suggested by BID5. Specifically, we make the following modifications to nv-wavenet: (i) we add initialization of the skip connections with previous WAV samples, (ii) we increase the kernel capacity to support 128 residual channels, and (iii) we add the conditioning to the last fully connected layer.

In order to improve the generalization capability of the encoder, and to force it to maintain higher-level information, we employ a dedicated augmentation procedure that changes the pitch locally. The resulting audio is of similar quality but is slightly out of tune. Specifically, we perform our training on segments of one second in length. For augmentation, we uniformly select a sub-segment of length between 0.25 and 0.5 seconds and modulate its pitch by a random number between -0.5 and 0.5 half-steps, using librosa BID22.

Let s^j be an input sample from domain j = 1, 2, ..., k, with k being the number of domains employed during training. Let E be the shared encoder, D^j the WaveNet decoder for domain j, C the domain classification network, and O(s, r) the random augmentation procedure applied to a sample s with a random seed r. The network C predicts which domain the input data came from, based on the latent vectors. It applies three 1D-convolution layers with the ELU BID2 nonlinearity. The last layer projects the vectors to dimension k, and the vectors are subsequently averaged to a single vector in R^k. A detailed diagram of network C is shown as part of FIG0. During training, the domain classification network C minimizes the classification loss

    L_C = Σ_j Σ_{s^j} ℓ(C(E(O(s^j, r))), j),

and the music-to-music autoencoders j = 1, 2, ... are trained with the loss

    Σ_j Σ_{s^j} L(D^j(E(O(s^j, r))), s^j) − λ ℓ(C(E(O(s^j, r))), j),

where ℓ is the cross-entropy classification loss of C, and L(o, y) is the cross-entropy loss applied to each element of the output o and the corresponding element of the target y separately. Note that the decoder D^j is an autoregressive model that is conditioned on the output of E. During training, the autoregressive model is fed the target output s^j from the previous time-step, instead of the generated output. To perform the actual transformation from a sample s of any domain, even an unseen musical domain, to output domain j, we apply the autoencoder of domain j to it, without applying the distortion. The new sample ŝ^j is, therefore, given as D^j(E(s)). The bottleneck during inference is the WaveNet autoregressive process, which is optimized by the dedicated CUDA kernels.

We conduct music translation experiments using a mix of human evaluation and qualitative analysis, in order to overcome the challenges of evaluating generative models. The experiments were done in two phases. In the first phase, described in an earlier technical report BID23, we train our network on six arbitrary classical musical domains: (i) Mozart's symphonies conducted by Karl Böhm, (ii) Haydn's string quartets, performed by the Amadeus Quartet, (iii) J.S Bach's cantatas for orchestra, chorus and soloists, (iv) J.S Bach's organ works, (v) Beethoven's piano sonatas, performed by Daniel Barenboim, and (vi) J.S Bach's keyboard works, played on Harpsichord.
The music recordings by Bach (iii,iv,vi) are from the Teldec 2000 Complete Bach collection. In the second phase, in order to allow reproducibility and sharing of the code and models, we train on audio data from MusicNET BID33. Domains were chosen as the largest domains that show variability between composers and instruments. The following six domains were selected: (i) J.S Bach's suites for cello, (ii) Beethoven's piano sonatas, (iii) Cambini's Wind Quintet, (iv) J.S Bach's fugues, played on piano, (v) Beethoven's violin sonatas and (vi) Beethoven's string quartet. This public dataset is somewhat smaller than the data used in phase one. The phases differ in the depth of the decoders: in the first phase, we employed blocks of ten layers, while in the second, we shifted to larger receptive fields and blocks of 14. The training and test splits are strictly separated by dividing the tracks (or audio files) between the two sets. The segments used in the evaluation experiments below were not seen during training. During training, we iterate over the training domains, such that each training batch contains 16 randomly sampled one second samples from a single domain. Each batch is first used to train the domain classification network C, and then to train the universal encoder and the domain decoder, given the updated discriminator. The method was implemented in the PyTorch framework, and trained on eight Tesla V100 GPUs for a total of 6 days. We used the ADAM optimization algorithm with a learning rate of 10 −3 and a decay factor of 0.98 every 10,000 samples. We weighted the confusion loss with λ = 10 −2. The first set of experiments compared the method to human musicians using the phase one network. Since human musicians, are equipped by evolution with music skills, selected among their peers according to their talent, and who have trained for decades, we do not expect to do better than humans at this point. To perform this comparsion, music from domain X was converted to piano, for various X. The piano was selected for practical reasons: pianists are in higher availability than other musicians and a piano is easier to produce than, e.g., an orchestra. Three professional musicians with a diverse were employed for the conversion task: E, who is a conservatory graduate with an extensive in music theory and piano performance, and also specializes in transcribing music; M, who is a professional producer, composer, pianist and audio engineer, who is an expert in musical transcription; and A who is a music producer, editor, and a skilled player of keyboards and other instruments. The task used for comparison was to convert 60 segments of five seconds each to piano. Three varied sources were used. 20 of the segments were from Bach's keyboard works, played on a Harpsichord, and 20 others were from Mozart's 46 symphonies conducted by Karl Böhm, which are orchestral works. The last group of 20 segments was a mix of three different domains that were not encountered during training -Swing Jazz, metal guitar riffs, and instrumental Chinese music. The 60 music segments were encoded by the universal encoder and decoded by the WaveNet trained on Beethoven's piano sonatas, as performed by Daniel Barenboim. In order to compare between the conversions, we employed human evaluation, which is subjective and could be a mix of the assessment of the audio quality and the assessment of the translation itself. 
This limits the success of the automatic method, since the quality of the algorithm's output is upper bounded by the neural network architecture and cannot match that of a high quality recording. Since there is a trade-off between the fidelity to the original piece and the ability to create audio in the target domain, we present two scores: audio quality of the output piano and a matching score for the translation. While one can argue that style is hard to define and, therefore, such subjective experiments are not well founded, there are many similar MOS experiments in image to image translation, e.g., BID18, and indeed MOS studies are used exactly where the translation metric is perceptual and subjective. Specifically, Mean Opinion Scores (MOS) were collected using the CrowdMOS BID27 package. Two questions were asked: FORMULA0 what is the quality of the audio, and how well does the converted version match the original. The are shown in Tab. 1. It shows that our audio quality is considerably lower than the produced by humans, using a keyboard connected to a computer (which should be rated as near perfect and makes any other audio quality in the MOS experiment pale in comparison). Regarding the translation success, the conversion from Harpsichord is better than the conversion from Orchestra. Surprisingly, the conversion from unseen domains is more successful than both these domains. In all three cases, our system is outperformed by the human musicians, whose conversions will soon be released to form a public benchmark. Lineup experiment In another set of experiments, we evaluate the ability of persons to identify the source musical segment from the conversions. We present, in each test, a set of six segments. One segment is a real segment from a random domain out of the ones used to train our network, and five are the associated translations. We shuffle the segments and ask which is the original one and which are conversions. To equate the quality of the source to that of the translations and prevent identification by quality, we attach the source after passing it through its domain's autoencoder. The translation is perfectly authentic, if the distribution of answers is uniform. However, the task is hard to define. In a first attempt, Amazon Mechanical Turk (AMT) freelancers tended to choose the Mozart domain as the source, regardless of the real source and the presentation order, probably due to its relatively complex nature in comparison to the other domains. This is shown in the confusion matrix of FIG1. We, therefore, asked two amateur musicians (T, a guitarist, and S a dancer and a drummer with a in piano) and the professional musician A (from the first experiment) to identify the source sample out of the six options, based on authenticity. The , in FIG1 show that there is a great amount of confusion. T and A failed in most cases, and A tended to show a similar bias to the AMT freelancers. S also failed to identify the majority of the cases, but showed coherent confusion patterns between pairs of instruments. NSynth pitch experiments NSynth BID5 is an audio dataset containing samples of 1,006 instruments, each sample labeled with a unique pitch, timbre, and envelope. Each sample is a four second monophonic 16kHz snippet, ranging over every pitch of a standard MIDI piano as well as five different velocities. It was not seen during training of our system. We measure the correlation of embeddings retrieved using the encoder of our network across pitch for multiple instruments. 
The first two columns (from the left hand side) of FIG2 show selfcorrelations, while the third column shows correlation across instruments. As can be seen, the embedding encodes pitch information very clearly, despite being trained on complex polyphonic audio. The cosine similarity between the two instruments for the same pitch is, on average, 0.90-0.95 (mean of the diagonal), depending on the pair of instruments. In order to freely share our trained models and allow for maximal reproducibility, we have retrained the network with data from MusicNet BID33. The following experiments are based on this network and are focused on understanding the properties of the conversion. The description is based on the supplementary media available at musictranslation.github.io.Are we doing more than timbral transfer? Is our system equivalent to pitch estimation followed by rendering with a different instrument, or can it capture stylistic musical elements? We demonstrate that our system does more than timbral transfer in two ways. Consider the conversions presented in supplementary S1, which consist of many conversion examples from each of the domains to every other domain. There are many samples where it is clear that more than timbral transfer is happening. For example, when converting Beethoven's string quartet music to a wind quintet (Sample #30), an ornamentation note is added in the output that is nowhere to be found in the input music; when converting Beethoven's violin sonata to Beethoven's solo piano (Samples #24 and #23), the violin line seamlessly integrated into the piano part; when converting Beethoven's solo piano music to Bach's solo cello (Sample #9), the bass line of the piano is converted to cello. It is perhaps most evident when converting solo piano to piano and violin; an identity transformation would have been a valid translation, but the network adds a violin part to better match the output distribution. To further demonstrate the capabilities of our system, we train a network on two piano domains: MusicNet solo piano recordings of Bach and Beethoven. We reduce the size of the latent space to 8 to limit the ability of the original input to be repeated exactly, thereby encouraging the decoders to be more "creative" than they normally would, with the goal of observing how decoders trained on different training data will use their freedom of expression. The input we employ is a simple MIDI synthesized as a piano. Supplementary S2 presents a clear stylistic difference: one can hear some counterpoint in the Bach sample, whereas the Beethoven output exhibits a more "Sturm und Drang" feeling, indicating that the network learns stylistic elements from the training data. DISPLAYFORM0 Comparison with previous methods We compare our with those of Concatenative Synthesis methods in supplementary S3. To do that, we use our system to translate target files from published of two works in that field, and present the methods' side-by-side. Samples 1 and 2 are compared with the published of BID3, a work comparing several Concatenative Synthesis methods, and uses a violin passage as source audio input. Sample 3 is compared with MATConcat , which uses a corpus of string quartets as a source material. Sample 1 is a fugue performed on a piano. We show that we are able to convincingly produce string quartet and wind ensemble renditions of the piece. To push our model to its boundaries, we also attempt to convert the polyphonic fugue to solo cello, obtaining a rather convincing . 
We believe that our surpass in naturalness those obtained by concatenative methods. Sample 2 is an orchestra piece, which for our system is data that has never been seen during training. We convert it to piano, solo cello and a wind quintet, achieving convincing , that we believe surpass the concatenative synthesis . Sample 3 is another orchestra piece, which includes a long drum roll, followed by brass instruments. It is not quite well-defined how to convert a drum roll to a string quartet, but we believe our rendition is more coherent. Our method is able to render the brass instruments and orchestral music after the drum roll more convincingly than MATConcat, which mimics the audio volume but loses most musical content. Universality Note that our network has never observed drums, brass instruments or an entire orchestra during training, and, therefore, the of supplementary S3 also serve to demonstrate the versatility of the encoder module ing from our training procedure (and so do those of S2). Supplementary S4 presents more out-of-domain conversion , including other domains from MusicNet, whistles, and even spontaneous hand clapping. The universality property hinges on the success of training a domain-independent representation. As can be seen in the confusion matrices given in FIG3, the domain classification network does not do considerably better than chance when the networks converge. Ablation Analysis We conducted three ablation studies. In the first study, the training procedure did not use the augmentation procedure of Sec. 3.2. This ed in a learning divergence during training, and we were unable to obtain a working model trained without augmentation, despite considerable effort. In order to investigate the option of not using augmentation in a domain where training without it converges, we have applied our method to the task of voice conversion. Our experiments show a clear advantage for applying augmentation, see Appendix A. Additional experiments were conducted, for voice conversion, using the VQ-VAE method of van den BID35.In the second ablation study, the domain classification network was not used (λ = 0). Without requiring that the shared encoder remove domain-specific information from the latent representation of the input, the network learned to simply encode all information in the latent vectors, and all decoders learned to turn this information back to the original waveform. This ed in a model that does not do any conversion at all. Finally, we performed an ablation study on the latent code size, in which we convert a simple MIDI clip to the Beethoven domain and the Bach domain. Samples are available as supplementary S6. As can be heard, a latent dimensionality of 64 tends to reconstruct the input (unwanted memorization). A model with a latent space of 8 (used in S2) performs well. A model with a latent dimensionality of 4 is more creative, less related to the input midi, and also suffers from a reduction in quality. Semantic blending We blend two encoded musical segments linearly in order to check the additivity of the embedding space. For that, we have selected two random five second segments i and j from each domain and embedded both using the encoder, obtaining e i and e j. We then combine the embeddings as follows: starting with 3.5 seconds from e i, we combine the next 1.5 seconds of e i with the first 1.5 seconds of e j using a linear weighting with weights 1−t/1.5 and t/1.5 respectively, where t ∈ [0, 1.5]. We then use the various decoders to generate audio. 
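A minimal sketch of this linear blending in embedding space is given below, assuming NumPy arrays of encoder outputs; the frame rate of the embedding sequence and the function name are illustrative assumptions rather than details taken from the paper's code.

```python
import numpy as np

def blend_embeddings(e_i, e_j, frames_per_sec, lead_sec=3.5, xfade_sec=1.5):
    # e_i, e_j: encoder outputs of shape (num_frames, 64), one latent vector per frame.
    lead = int(lead_sec * frames_per_sec)
    xfade = int(xfade_sec * frames_per_sec)
    t = np.linspace(0.0, xfade_sec, num=xfade, endpoint=False)   # t in [0, 1.5)
    w_j = (t / xfade_sec)[:, None]        # weight t/1.5 applied to e_j
    w_i = 1.0 - w_j                       # weight 1 - t/1.5 applied to e_i
    mixed = w_i * e_i[lead:lead + xfade] + w_j * e_j[:xfade]
    # The blended sequence can be fed to any of the domain decoders D_j.
    return np.concatenate([e_i[:lead], mixed], axis=0)
```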
The are natural and the shift is completely seamless, as far as we observe. See supplementary S5 for samples. The samples also demonstrate that in the scenario we tested, one can alternatively use fade-in and fade-out to create a similar effect. We therefore employ a second network that is used for a related task of voice conversion (see Appendix A) and demonstrate that in the case of voice conversion, latent space embedding is clearly superior to converting the audio itself. These samples can also be found in supplementary S5 for details and samples. Our work demonstrates capabilities in music conversion, which is a high-level task (a terminology that means that they are more semantic than low-level audio processing tasks), and could open the door to other high-level tasks, such as composition. We have initial that we find interesting: by reducing the size of the latent space, the decoders become more "creative" and produce outputs that are natural yet novel, in the sense that the exact association with the original input is lost. This work is part of Adam Polyak's Ph. D thesis research conducted at Tel Aviv University. We further evaluate our method on the task of voice conversion, which is not as challenging as the music conversion task explored in this work. It is, therefore, a convenient test bed when comparing to the VQ-VAE (van den) method, which, as we mention in the paper, did not perform well in our music-based experiments, and which was shown by the authors to work on voice conversion. In addition, as mentioned in Sec. 4.2, successful training on the music domains requires data augmentation. In voice conversion, we were able to successfully train our network even without data augmentation, and we can therefore perform a direction comparison. We apply our method, a variant without data augmentation, and the VQVQE method on three publicly available datasets: "Nancy" from Blizzard 2011 BID15, Blizzard 2013 BID16 and LJ BID11 dataset. The generated samples are obtained by converting an audio produced by the Google Cloud TTS robot to these three voices. The models are evaluated by their quality using the Mean Opinion Score, as obtained with the CrowdMOS BID27 package. As can be seen in Tab. 2, samples generated by our WaveNet autoencoder based method are of higher quality than those of VQ-VAE. A second is that the method trains well in voice conversion, even without the data augmentation. However, this leads to inferior . We slightly modify the WaveNet autoencoder used in our method for the voice conversion task. Specifically, we modify the size of the latent encoding to be in R 48, instead of R 64. The rest of the model details remain the same as in the music translation task. In our implementation of the VQ-VAE, the encoder was composed of 6 one-dimensional convolution layer with a ReLU activation. As in the original paper, the convolutions were with a stride of 2 and kernel size of 4. Therefore, the mu-law quantized waveform is temporally downsampled by ×64. We used a dictionary of 512 vectors in R 128. The obtained quantized encoding is upsampled and serves to condition a decoder which reconstructs the input waveform. Here as well, we follow the original paper and implement a single WaveNet decoder for all three speaker domains, this is achieved by concatenating the quantized encoding with a learned speaker embedding. We train the VQ-VAE using dictionary updates with Exponential Moving Averages (EMA) with a decay parameter of γ = 0.99 and a commitment parameter of β = 1. 
| An automatic method for converting music between instruments and styles | 637 | scitldr |
Most existing defenses against adversarial attacks only consider robustness to L_p-bounded distortions. In reality, the specific attack is rarely known in advance, and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of L_p-bounded distortions. In this work, we advocate measuring robustness against a much broader range of unforeseen attacks, attacks whose precise form is unknown during defense design. We propose several new attacks and a methodology for evaluating a defense against a diverse range of unforeseen distortions. First, we construct novel adversarial JPEG, Fog, Gabor, and Snow distortions to simulate more diverse adversaries. We then introduce UAR, a summary metric that measures the robustness of a defense against a given distortion. Using UAR to assess robustness against existing and novel attacks, we perform an extensive study of adversarial robustness. We find that evaluation against existing L_p attacks yields redundant information which does not generalize to other attacks; we instead recommend evaluating against our significantly more diverse set of attacks. We further find that adversarial training against either one or multiple distortions fails to confer robustness to attacks with other distortion types. These results underscore the need to evaluate and study robustness against unforeseen distortions.

Neural networks perform well on many benchmark tasks yet can be fooled by adversarial examples, inputs designed to subvert a given model. Adversaries are usually assumed to be constrained by an L∞ budget, while other modifications such as adversarial geometric transformations, patches, and even 3D-printed objects have also been considered. However, most work on adversarial robustness assumes that the adversary is fixed and known in advance, and defenses against adversarial attacks are often constructed in view of this specific assumption. In practice, adversaries can modify and adapt their attacks so that they are unforeseen. In this work, we propose novel attacks which enable the diverse assessment of robustness to unforeseen attacks. Our attacks are varied (§2) and qualitatively distinct from current attacks. We propose adversarial JPEG, Fog, Gabor, and Snow attacks (sample images in Figure 1). We propose an unforeseen attack evaluation methodology (§3) that involves evaluating a defense against a diverse set of held-out distortions decoupled from the defense design. For a fixed, held-out distortion, we evaluate the defense over a calibrated range of distortion sizes whose strength is roughly comparable across distortions. For each fixed distortion, we summarize the robustness of a defense against that distortion relative to a model adversarially trained on that distortion, a measure we call UAR. We provide code and calibrations to easily evaluate a defense against our suite of attacks at https://github.com/iclr-2020-submission/advex-uar. By applying our method to 87 adversarially trained models and 8 different distortion types (§4), we find that existing defenses and evaluation practices have marked weaknesses.

Figure 1: Attacked images (label "espresso maker") for the new attacks (JPEG, Fog, Gabor, Snow) against adversarially trained models with large ε. Each of the adversarial images is optimized to maximize the classification loss.

Our results show that existing defenses based on adversarial training do not generalize to unforeseen adversaries, even when restricted to the 8 distortions in Figure 1. This adds to the mounting evidence that achieving robustness against a single distortion type is insufficient to impart robustness to unforeseen attacks (Tramèr & Boneh). Turning to evaluation, our results demonstrate that accuracy against different L_p distortions is highly correlated relative to the other distortions we consider. This suggests that the common practice of evaluating only against L_p distortions to test a model's adversarial robustness can give a misleading account. Our analysis demonstrates that our full suite of attacks adds substantive attack diversity and gives a more complete picture of a model's robustness to unforeseen attacks. A natural approach is to defend against multiple distortion types simultaneously, in the hope that seeing a larger space of distortions provides greater transfer to unforeseen distortions. Unfortunately, we find that defending against even two different distortion types via joint adversarial training is difficult (§5). Specifically, joint adversarial training leads to overfitting at moderate distortion sizes. In summary, we propose a metric, UAR, to assess the robustness of defenses against unforeseen adversaries. We introduce a total of 4 novel attacks. We apply UAR to assess how robustness transfers to existing attacks and our novel attacks. Our results demonstrate that existing defense and evaluation methods do not generalize well to unforeseen attacks.

We consider distortions (attacks) applied to an image x ∈ R^{3×224×224}, represented as a vector of RGB values. Let f: R^{3×224×224} → R^{100} be a model mapping images to logits, and let ℓ(f(x), y) denote the cross-entropy loss. For an input x with true label y and a target class y′ ≠ y, our adversarial attacks attempt to find x′ such that (1) the attacked image x′ is obtained by applying a constrained distortion to x, and (2) the loss ℓ(f(x′), y′) is minimized (targeted attack). Adversarial training is a strong defense baseline against a fixed attack which updates using an attacked image x′ instead of the clean image x at each training iteration. We consider 8 attacks: L∞, L2, L1, Elastic, JPEG, Fog, Gabor, and Snow. We show sample attacked images in Figure 1 and the corresponding distortions in Figure 2. The JPEG, Fog, Gabor, and Snow attacks are new to this paper, and the L1 attack uses the Frank-Wolfe algorithm to improve on previous L1 attacks.

Figure 2: Scaled pixel-level differences between original and attacked images for each attack (label "espresso maker"): L∞ (4.1m, 11.1k, 32), L2 (1.3m, 4.8k, 99), L1 (224k, 2.6k, 218), Elastic (3.1m, 15.2k, 253); new attacks: JPEG (5.4m, 18.7k, 255), Fog (4.1m, 13.2k, 89), Gabor (3.7m, 13.3k, 50), Snow (11.3m, 32.0k, 255). The L1, L2, and L∞ norms of the difference are shown after the attack name. Our novel attacks display behavior which is qualitatively different from that of the L_p attacks. Attacked images are shown in Figure 1, and unscaled differences are shown in Figure 9, Appendix B.1.

We now describe the attacks, whose distortion sizes are controlled by a parameter ε; output pixel values are clamped to the valid pixel range. Existing attacks. The L_p attacks with p ∈ {1, 2, ∞} modify an image x to an attacked image x′ = x + δ. We optimize δ under the constraint ‖δ‖_p ≤ ε, where ‖·‖_p is the L_p-norm on R^{3×224×224}.
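To make the L_p threat model above concrete, here is a minimal sketch of a randomly-initialized targeted L∞ PGD attack in PyTorch. The pixel range of [0, 1], the function name, and the default number of steps are illustrative assumptions; the paper's own optimization choices, including the ε/√steps step size and the Frank-Wolfe variant used for L1, are described at the end of this section.

```python
import torch

def targeted_linf_pgd(model, x, y_target, eps, steps=10):
    # Randomly-initialized targeted L-infinity PGD (illustrative sketch).
    # x: image batch with pixel values assumed to lie in [0, 1]; eps on the same scale.
    loss_fn = torch.nn.CrossEntropyLoss()
    step_size = eps / steps ** 0.5                   # mirrors the eps/sqrt(steps) scaling
    delta = (torch.rand_like(x) * 2 - 1) * eps       # random start inside the L_inf ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y_target)   # targeted: drive loss on y_target down
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta - step_size * grad.sign()  # signed gradient descent step
            delta = delta.clamp(-eps, eps)           # project back onto the L_inf ball
            delta = (x + delta).clamp(0, 1) - x      # keep pixels in the valid range
    return (x + delta).detach()
```

An untargeted variant would instead ascend the loss on the true label y; the attacks studied here are targeted with a uniformly random incorrect class.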
The Elastic attack warps the image by allowing distortions x′ = Flow(x, V), where V: {1, ..., 224}^2 → R^2 is a vector field on pixel space and Flow sets the value of pixel (i, j) to the bilinearly interpolated original value at (i, j) + V(i, j). We construct V by smoothing a vector field W with a Gaussian kernel (size 25 × 25, std. dev. 3 for a 224 × 224 image) and optimize W under the constraint ‖W(i, j)‖_∞ ≤ ε for all (i, j). This differs in details from prior work but is similar in spirit.

Novel attacks. As previously discussed in the context of defenses, JPEG compression applies a lossy linear transformation, which we also denote JPEG, based on the discrete cosine transform to image space, followed by quantization. The JPEG attack imposes the L∞-constraint ‖JPEG(x′) − JPEG(x)‖_∞ ≤ ε on the attacked image x′. We optimize z = JPEG(x′) and apply a right inverse of JPEG to obtain x′. Our novel Fog, Gabor, and Snow attacks are adversarial versions of non-adversarial distortions proposed in the literature. Fog and Snow introduce adversarially chosen partial occlusions of the image resembling the effect of mist and snowflakes, respectively; stochastic versions of Fog and Snow have appeared in prior work. Gabor superimposes adversarially chosen additive Gabor noise onto the image; a stochastic version has appeared in prior work. These attacks work by optimizing a set of parameters controlling the distortion over an L∞-bounded set. Specifically, values for the diamond-square algorithm, sparse noise, and snowflake brightness (Figure 3) are chosen adversarially for Fog, Gabor, and Snow, respectively.

Figure 3: Snow before and after optimization (panels: initialization, optimized).

Optimization. To handle L∞ and L2 constraints, we use randomly-initialized projected gradient descent (PGD), which optimizes the distortion δ by gradient descent and projection to the L∞ and L2 balls. For L1 constraints, this projection is more difficult, and previous L1 attacks resort to heuristics (Tramèr & Boneh). We instead use the randomly-initialized Frank-Wolfe algorithm, which replaces projection with the simpler optimization of a linear function at each step (pseudocode in Appendix B.2).

Figure 4: Accuracies of L2 and Elastic attacks at different distortion sizes against a ResNet-50 model adversarially trained against L2 at ε = 9600 on ImageNet-100. At small distortion sizes, the model appears to defend well against Elastic, but large distortion sizes reveal a lack of transfer.

We now propose a method to assess robustness against unforeseen distortions, which relies on evaluating a defense against a diverse set of attacks that were not used when designing the defense. Our method must address the following issues:
• The range of distortion sizes must be wide enough to avoid the misleading behavior in which robustness appears to transfer at low distortion sizes but not at high distortion sizes (Figure 4);
• The set of attacks considered must be sufficiently diverse.
We first provide a method to calibrate distortion sizes and then use it to define a summary metric that assesses the robustness of a defense against a specific unforeseen attack. Using this metric, we are able to assess diversity and recommend a set of attacks to evaluate against.

Calibrate distortion size using adversarial training. As shown in Figure 4, the correlation between adversarial robustness against different distortion types may look different for different ranges of distortion sizes. It is therefore critical to evaluate on a wide enough range of distortion sizes ε. We choose the minimum and maximum distortion sizes using the following principles; sample images at ε_min and ε_max are shown in Figure 5b.
For our 8 distortion types, we provide reference values of ATA(A, ε) on this calibrated range of 6 distortion sizes on ImageNet-100 (Table 1, §4) and CIFAR-10 (Table 3, Appendix C.3.2). This allows UAR computation for a new defense using 6 adversarial evaluations and no adversarial training, reducing computational cost from 192+ to 6 NVIDIA V100 GPU-hours on ImageNet-100. Evaluate against diverse distortion types. Since robustness against different distortion types may have low or no correlation (Figure 6b), measuring performance on different distortions is important to avoid overfitting to a specific type, especially when a defense is constructed with it in mind (as with adversarial training). Our in §4 demonstrate that choosing appropriate distortion types to evaluate against requires some care, as distortions such as L 1, L 2, and L ∞ that may seem different can actually have highly correlated scores against defenses (see Figure 6). We instead recommend evaluation against our more diverse attacks, taking the L ∞, L 1, Elastic, Fog, and Snow attacks as a starting point. We apply our methodology to the 8 attacks in §2 using models adversarially trained against these attacks. Our reveal that evaluating against the commonly used L p -attacks gives highly correlated information which does not generalize to other unforeseen attacks. Instead, they suggest that evaluating on diverse attacks is necessary and identify a set of 5 attacks with low pairwise robustness transfer which we suggest as a starting point when assessing robustness to unforeseen adversaries. Dataset and model. We use two datasets: CIFAR-10 and ImageNet-100, the 100-class subset of ImageNet-1K containing every 10 th class by WordNet ID order. We use ResNet-56 for CIFAR-10 and ResNet-50 as implemented in torchvision for ImageNet-100 . We give training hyperparameters in Appendix A. Adversarial training and evaluation procedure. We construct hardened models using adversarial training . To train against attack A, for each mini-batch of training images, we select a uniform random (incorrect) target class for each image. For maximum distortion size ε, we apply the targeted attack A to the current model with distortion size ε ∼ Uniform(0, ε) and update the model with a step of stochastic gradient descent using only the ing adversarial images (no clean images). The random size scaling improves performance especially against smaller distortions. We use 10 optimization steps for all attacks during training except for Elastic, where we use 30 steps due to its more difficult optimization problem. When PGD is used, we use step size ε/ √ steps, the optimal scaling for non-smooth convex functions (; 1983). We adversarially train 87 models against the 8 attacks from §2 at the distortion sizes described in §3 and evaluate them on the ImageNet-100 and CIFAR-10 validation sets against 200-step targeted attacks with uniform random (incorrect) target class. This uses more steps for evaluation than train- Fog ε = 8192 Gabor ε = 3200 (a) UAR scores for adv. trained defenses (rows) against attacks (columns) on ImageNet-100. See Figure 12 for more ε values and Appendix C.3.2 for CIFAR-10 . ing per best practices . We use UAR to analyze the in the remainder of this section, directing the reader to Figures 10 and 11 (Appendix C.2) for exhaustive and to Appendix D for checks for robustness to random seed and number of attack steps. Existing defense and evaluation methods do not generalize to unforeseen attacks. 
The many low off-diagonal UAR scores in Figure 6a make clear that while adversarial training is a strong baseline against a fixed distortion, it only rarely confers robustness to unforeseen distortions. Notably, we were not able to achieve a high UAR against Fog except by directly adversarially training against it. Despite the general lack of transfer in Figure 6a, the fairly strong transfer between the L p -attacks is consistent with recent progress in simultaneous robustness to them . Figure 6b shows correlations between UAR scores of pairs of attacks A and A against defenses adversarially trained without knowledge 3 of A or A. The demonstrate that defenses trained without knowledge of L p -attacks have highly correlated UAR scores against the different L p attacks, but this correlation does not extend to their evaluations against other attacks. This suggests that L pevaluations offer limited diversity and may not generalize to other unforeseen attacks. The L ∞, L 1, Elastic, Fog, and Snow attacks offer greater diversity. Our on L p -evaluation suggest that more diverse attack evaluation is necessary for generalization to unforeseen attacks. As the unexpected correlation between UAR scores against the pairs (Fog, Gabor) and (JPEG, L 1) in Figure 6b demonstrates, even attacks with very different distortions may have correlated behaviors. Considering all attacks in Figure 6 together in signficantly more diversity, which we suggest for evaluation against unforeseen attacks. We suggest the 5 attacks (L ∞, L 1, Elastic, Fog, and Snow) with low UAR against each other and low correlation between UAR scores as a good starting point. A natural idea to improve robustness against unforeseen adversaries is to adversarially train the same model against two different types of distortions simultaneously, with the idea that this will cover a larger portion of the space of distortions. We refer to this as joint adversarial training (; Tramèr &). For two attacks A and A, at each training step, we compute the attacked image under both A and A and backpropagate with respect to gradients induced by the image with greater loss. This corresponds to the "max" loss described in Tramèr &. We jointly train models for (L ∞, L 2), (L ∞, L 1), and (L ∞, Elastic) using the same setup as before Normal Training L∞ ε = 16, L1 ε = 612000 Normal Training Transfer for jointly trained models. Figure 7 reports UAR scores for jointly trained models using ResNet-50 on ImageNet-100; full evaluation accuracies are in Figure 19 (Appendix E). Comparing to Figure 6a and Figure 12 (Appendix E), we see that, relative to training against only L 2, joint training against (L ∞, L 2) slightly improves robustness against L 1 without harming robustness against other attacks. In contrast, training against (L ∞, L 1) is worse than either training against L 1 or L ∞ separately (except at small ε for L 1). Training against (L ∞, Elastic) also performs poorly. Joint training and overfitting. Jointly trained models achieve high training accuracy but poor validation accuracy (Figure 8) that fluctuates substantially for different random seeds (Table 4, Appendix E.2). Figure 8 shows the overfitting behavior for (L ∞, Elastic): L ∞ validation accuracy decreases significantly during training while training accuracy increases. This contrasts with standard adversarial training (Figure 8), where validation accuracy levels off as training accuracy increases. Overfitting primarily occurs when training against large distortions. 
We successfully trained against the (L ∞, L 1) and (L ∞, Elastic) pairs for small distortion sizes with accuracies comparable to but slightly lower than observed in Figure 11 for training against each attack individually (Figure 18, Appendix E). This agrees with behavior reported by Tramèr & on CIFAR-10. Our intuition is that harder training tasks (more diverse distortion types, larger ε) make overfitting more likely. We briefly investigate the relation between overfitting and model capacity in Appendix E.3; validation accuracy appears slightly increased for ResNet-101, but overfitting remains. We have seen that robustness to one attack provides limited information about robustness to other attacks, and moreover that adversarial training provides limited robustness to unforeseen attacks. These suggest a need to modify or move beyond adversarial training. While joint adversarial training is one possible alternative, our show it often leads to overfitting. Even ignoring this, it is not clear that joint training would confer robustness to attacks outside of those trained against. Evaluating robustness has proven difficult, necessitating detailed study of best practices even for a single fixed attack . We build on these best practices by showing how to choose and calibrate a diverse set of unforeseen attacks. Our work is a supplement to existing practices, not a replacement-we strongly recommend following the guidelines in and in addition to our recommendations. Some caution is necessary when interpreting specific numeric in our paper. Many previous implementations of adversarial training fell prone to gradient masking , with apparently successful training occurring only recently . While evaluating with moderately many PGD steps helps guard against this, shows that an L ∞ -trained model that appeared robust against L 2 actually had substantially less robustness when evaluating with 10 6 PGD steps. If this effect is pervasive, then there may be even less transfer between attacks than our current suggest. For evaluating against a fixed attack, and can be seen as existing alternatives to UAR. They work by estimating "empirical robustness", which is the expected minimum ε needed to successfully attack an image. However, these apply only to attacks which optimize over an L p -ball of radius ε, and CLEVER can be susceptible to gradient masking. In addition, empirical robustness is equivalent to linearly averaging accuracy over ε, which has smaller dynamic range than the geometric average in UAR. Our add to a growing line of evidence that evaluating against a single known attack type provides a misleading picture of the robustness of a model (; ; ; Tramèr & ;). Going one step further, we believe that robustness itself provides only a narrow window into model behavior; in addition to robustness, we should seek to build a diverse toolbox for understanding machine learning models, including visualization , disentanglement of relevant features , and measurement of extrapolation to different datasets or the long tail of natural but unusual inputs . Together, these windows into model behavior can give us a clearer picture of how to make models reliable in the real world. For ImageNet-100, we trained on machines with 8 NVIDIA V100 GPUs using standard data augmentation. Following best practices for multi-GPU training , we ran synchronized SGD for 90 epochs with batch size 32×8 and a learning rate schedule with 5 "warm-up" epochs and a decay at epochs 30, 60, and 80 by a factor of 10. 
Initial learning rate after warm-up was 0.1, momentum was 0.9, and weight decay was 10 −4. For CIFAR-10, we trained on a single NVIDIA V100 GPU for 200 epochs with batch size 32, initial learning rate 0.1, momentum 0.9, and weight decay 10 −4. We decayed the learning rate at epochs 100 and 150. We show the images corresponding to the ones in Figure 2, with the exception that they are not scaled. The non-scaled images are shown in Figure 9. We chose to use the Frank-Wolfe algorithm for optimizing the L 1 attack, as Projected Gradient Descent would require projecting onto a truncated L 1 ball, which is a complicated operation. In contrast, Frank-Wolfe only requires optimizing linear functions g x over a truncated L 1 ball; this can be done by sorting coordinates by the magnitude of g and moving the top k coordinates to the boundary of their range (with k chosen by binary search). This is detailed in Algorithm 1. We will present with two additional versions of the JPEG attack which impose L 1 or L 2 constraints on the attack in JPEG-space instead of the L ∞ constraint discussed in Section 2. To avoid confusion, in this appendix, we denote the original JPEG attack by L ∞ -JPEG and these variants by New Attacks JPEG (5.4m, 18.7k, 255) Fog (4.1m, 13.2k, 89) Gabor (3.7m, 13.3k, 50) Snow (11.4m, 32.0k, 255) Figure 9: Differences of the attacked images and original image for different attacks (label "espresso maker"). The L 1, L 2, and L ∞ norms of the difference are shown in parentheses. As shown, our novel attacks display qualitatively different behavior and do not fall under the L p threat model. These differences are not scaled and are normalized so that no difference corresponds to white. Algorithm 1 Pseudocode for the Frank-Wolfe algorithm for the L 1 attack. s k ← index of the coordinate of g by with k th largest norm 9: end for 10: S k ← {s 1, . . ., s k}. 12: else 16: end if end for 19: 20: 21: Averagex with previous iterates 31: end for 32:x ← x (T) we find that they have extremely similar , so we omit L 1 -JPEG in the full analysis for brevity and visibility. Calibration values for these attacks are shown in Table 2. We show the full of all adversarial attacks against all adversarial defenses for ImageNet-100 in Figure 11. As described, the L p attacks and defenses give highly correlated information on heldout defenses and attacks respectively. Thus, we recommend evaluating on a wide range of distortion types. Full UAR scores are also provided for ImageNet-100 in Figure 12. We further show selected in Figure 13. As shown, a wide range of ε is required to see the full behavior. Attack (adversarial training) Normal training Gabor ε = 6.25 Gabor ε = 12.5 Gabor ε = 25 Gabor ε = 400 Gabor ε = 800 Gabor ε = 1600 Figure 13: Adversarial accuracies of attacks on adversarially trained models for different distortion sizes on ImageNet-100. For a given attack ε, the best ε to train against satisfies ε > ε because the random scaling of ε during adversarial training ensures that a typical distortion during adversarial training has size smaller than ε. We show the of adversarial attacks and defenses for CIFAR-10 in Figure 14. We experienced difficulty training the L 2 and L 1 attacks at distortion sizes greater than those shown and have omitted those runs, which we believe may be related to the small size of CIFAR-10 images. The ε calibration procedure for CIFAR-10 was similar to that used for ImageNet-100. 
We started with the perceptually small ε min values in Table 3 and increased ε geometrically with ratio 2 until adversarial accuracy of an adversarially trained model dropped below 40. Note that this threshold Table 3 and Figure 15. We omitted calibration for the L 2 -JPEG attack because we chose too small a range of ε for our initial training experiments, and we plan to address this issue in the future. We replicated our for the first three rows of Figure 11 with different random seeds to see the variation in our . As shown in Figure 16, deviations in are minor. We replicated the in Figure 11 with 50 instead of 200 steps to see how the changed based on the number of steps in the attack. As shown in Figure 17, the deviations are minor. We show the evaluation accuracies of jointly trained models in Figure 18. We show all the attacks against the jointly adversarially trained defenses in Figure 19. In Table 4, we study the dependence of joint adversarial training to random seed. We find that at large distortion sizes, joint training for certain pairs of distortions does not produce consistent over different random initializations. Table 4: Train and val accuracies for joint adversarial training at large distortion are dependent on seed. For train and val, ε is chosen uniformly at random between 0 and ε, and we used 10 steps for L ∞ and L 1 and 30 steps for elastic. Single adversarial training baselines are also shown. As a first test to understand the relationship between model capacity and overfitting, we trained ResNet-101 models using the same procedure as in Section 5. Briefly, overfitting still occurs, but ResNet-101 achieves a few percentage points higher than ResNet-50. We show the training curves in Figure 20 and the training and validation numbers in Table 5. "Common" visual corruptions such as (non-adversarial) fog, blur, or pixelation have emerged as another avenue for measuring the robustness of computer vision models (; ;). Recent work suggests that robustness to such common corruptions is linked to adversarial robustness and proposes corruption robustness as an easily computed indicator of adversarial robustness . We consider this alternative to our methodology by testing corruption robustness of our models on the ImageNet-C benchmark. Experimental setup. We evaluate on the 100-class subset of the corruption robustness benchmark ImageNet-C introduced in with the same classes as ImageNet-100, which we call ImageNet-C-100. It is the ImageNet-100 validation set with 19 common corruptions at 5 severities. We use the JPEG files available at https://github.com/hendrycks/ robustness. We show average accuracies by distortion type in Figure 21. Adversarial training against small distortions increases corruption robustness. The first column of each block in Figure 21 shows that training against small adversarial distortions generally increases average accuracy compared to an undefended model. However, training against larger distortions often decreases average accuracy, largely due to the ing decrease in clean accuracy. Adversarial distortions and common corruptions can affect defenses differently. Our L p -JPEG and elastic attacks are adversarial versions of the corresponding common corruptions. 
While training against adversarial JPEG at larger ε improves robustness against adversarial JPEG attacks (Figure 12 in Appendix C.2), Figure 21 shows that robustness against common JPEG corruptions decreases as we adversarially train against JPEG at larger ε, though it remains better than for normally trained models. Similarly, adversarial Elastic training at large ε begins to hurt robustness to its common counterpart. This is likely because common corruptions are easier than adversarial distortions, hence the increased robustness does not make up for the decreased clean accuracy. We show sample images of our attacks against undefended models trained in the normal way in Figure 22. Normal training Elastic ε = 0.125 Figure 15: UAR scores on CIFAR-10. Displayed UAR scores are multiplied by 100 for clarity. Attack (evaluation) L2 ε = 150 L2 ε = 300 L2 ε = 600 L2 ε = 1200 L2 ε = 2400 L2 ε = 4800 Normal training B ri g h tn e ss C o n tr a st E la st ic P ix e la te J P E G S p e c k le N o is e G a u ss ia n B lu r S p a tt e r S a tu ra te Normal training L∞ ε = 1 L∞ ε = 2 L∞ ε = 4 L∞ ε = 8 L∞ ε = 16 L∞ ε = 32 L2 ε = 150 L2 ε = 300 L2 ε = 600 L2 ε = 1200 L2 ε = 2400 L2 ε = 4800 L1 ε = 9562.44 L1 ε = 19125 L1 ε = 76500 L1 ε = 153000 L1 ε = 306000 L1 ε = 612000 | We propose several new attacks and a methodology to measure robustness against unforeseen adversarial attacks. | 638 | scitldr |
Deep neural networks (DNNs) have emerged as a powerful approach in recent years, solving long-standing Artificial Intelligence (AI) supervised and unsupervised tasks in natural language processing, speech processing, computer vision and others. In this paper, we attempt to apply DNNs to three different cyber security use cases: Android malware classification, incident detection and fraud detection. The data set of each use case contains real samples of known benign and malicious activities. These use cases are part of the Cybersecurity Data Mining Competition (CDMC) 2017. The efficient network architecture for the DNNs is chosen by conducting various trials of experiments over network parameters and network structures. The experiments with the chosen efficient configurations of DNNs are run for up to 1000 epochs with the learning rate set in the range [0.01-0.5]. The DNNs performed well in comparison to the classical machine learning algorithms in all the cyber security use cases. This is due to the fact that DNNs implicitly extract and build better features and identify the characteristics of the data that lead to better accuracy. The best accuracy obtained by DNNs and XGBoost is 0.940 and 0.741 on Android malware classification, 1.00 and 0.997 on incident detection, and 0.972 and 0.916 on fraud detection, respectively. The accuracy obtained by DNNs varies by -0.05%, +0.02%, -0.01% from the top scored system in the CDMC 2017 tasks. In this era of technical modernization, an explosion of new opportunities and efficient potential resources for organizations has emerged, but at the same time these technologies have resulted in threats to the economy. In such a scenario, proper security measures play a major role. Nowadays, hacking has become a common practice in organizations in order to steal data and information. This highlights the need for an efficient system to detect and prevent fraudulent activities. Cyber security is all about the protection of systems, networks and data in cyberspace. Malware remains one of the most significant security threats on the Internet. Malware is software that exhibits malicious activity in files or programs. These are unwanted programs, since they cause harm to the intended use of the system by making it behave in a very different manner than it is supposed to behave. Antivirus solutions and blacklists are used as the primary weapons of resistance against these malwares. Neither approach is effective on its own; they can only be used as an initial shelter in a real-time malware detection system. This is primarily because both approaches completely fail in detecting new malware that is created using polymorphic, metamorphic, domain flux and IP flux techniques. Machine learning algorithms have played a pivotal role in several use cases of cyber security BID0. Fortunately, deep learning approaches have become a prevailing subject in recent years due to their remarkable performance in various long-standing artificial intelligence (AI) supervised and unsupervised challenges BID1. This paper evaluates the effectiveness of deep neural networks (DNNs) for cyber security use cases: Android malware classification, incident detection and fraud detection. The paper is structured as follows. Section II discusses the related work. Section III provides background on deep neural networks (DNNs). Section IV presents the proposed methodology including the description of the data set. Results are displayed in Section V. The conclusion is presented in Section VI.
This section discusses the related work for cyber security use cases: Android malware classification, incident detection and fraud detection. Static and dynamic analysis is the most commonly used approaches in Android malware detection BID2. In static analysis, android permissions are collected by unpacking or disassembling the app. In dynamic analysis, the run-time execution characteristics such as system calls, network connections, power consumption, user interactions and memory utilization. Mostly, commercial systems use combination of both the static and dynamic analysis. In Android devices, static analysis is preferred due to the following advantageous such as less computational cost, low resource utilization, light-weight and less time consuming. However, dynamic analysis has the capability to detect the metamorphic and polymorphic malwares. In BID3 evaluated the performance of traditional machine learning classifiers for android malware detection with using the permission, API calls and combination of both the API calls and permission as features. These 3 different feature sets were collected from the 2510 APK files. All traditional machine learning classifiers performance is good with combination of API calls and permission feature set in comparison to the API calls as well as permission. BID4 proposed MalDozer that use sequences of API calls with deep learning to detect Android malware and classify them to their corresponding family. The system has performed well in both private and public data sets, Malgenome, Drebin. Recently, the privacy and security for cloud computing is briefly discussed by BID5. The discussed various 28 cloud security issues and categorized those issues into five major categories. BID6 proposed machine learning based anomaly detection that acts on different layers e.g. the network, the service, or the workflow layers. BID7 discussed the issues in creating the intrusion detection for the cloud infrastructure. Also, how rule based and machine learning based system can be combined as hybrid system is shown. BID8 discussed the security problems in cloud and proposed incident detection system. They showed how incident detection system can perform well in comparison to the intrusion detection. In BID9 did comparative study of six different traditional machine learning classifiers in identifying the financial fraud. In BID10 discussed the applicability of data mining approaches for financial fraud detection. Deep learning is a sub model of machine learning technique largely used by researchers in recent days. This has been applied for various cyber security use cases BID11, BID12, BID13, BID14, BID15, BID16, BID17, BID18. Following, this paper proposes a unique DNN architecture which works efficiently on various cyber security use cases. The purpose of this section is to discuss the concepts of deep neural networks (DNNs) architecture concisely and promising techniques behind to train DNNs. Artificial neural networks (ANNs) represent a directed graph in which a set of artificial neuron generally called as units in mathematical model that are connected together with edges. This influenced by the characteristics of biological neural networks, where nodes represent biological neurons and edges represent synapses. A feed forward network is a type of ANNs. A feed forward network (FFN) consists of a set of units that are connected together with edges in a single direction without formation of a cycle. They are simple and most commonly used algorithm. 
Multi-layer perceptron (MLP) is a subset of FFNs that consists of 3 or more layers of artificial neurons, termed units. The 3 layers are an input layer, a hidden layer and an output layer. The number of hidden layers can be increased when the data is complex in nature, so the number of hidden layers is a parameter that relies on the complexity of the data. These units together form an acyclic graph that passes information or signals in the forward direction from layer to layer without dependence on past input. An MLP can be written as a mapping O: R^p → R^q, where p and q are the sizes of the input vector x = (x_1, x_2, ..., x_p) and the output vector O(x) respectively. The computation of each hidden layer Hl_i can be mathematically formulated as Hl_i(x) = f(w_i x + b_i), where w_i and b_i are the weight matrix and bias of layer i and f is a non-linear activation function. When the network consists of l hidden layers, the combined representation can be generally defined as O(x) = Hl_l(Hl_{l-1}(... Hl_1(x))). Rectified linear units (ReLU) have turned out to be more proficient and are capable of accelerating the entire training process altogether BID19. Selecting ReLU is a more efficient choice when considering the time cost of training on vast amounts of data, since it not only substantially speeds up the training process but also possesses advantages over traditional activation functions such as the logistic sigmoid function and hyperbolic tangent function BID20. We refer to neurons with this nonlinearity as ReLUs, following BID21; the activation is f(x) = max(0, x). We consider TensorFlow BID22 in conjunction with Keras BID23 as the software framework. To increase the speed of gradient descent computations of the deep learning architectures, we use GPU-enabled TensorFlow on a single NVidia GK110BGL Tesla K40. All deep learning architectures are trained using the back propagation through time (BPTT) technique. Task 1 (Android Malware Classification): This data set includes information on 37,107 unique APIs from 61,730 APK files BID24. These APK (application package) files were collected from the Opera Mobile Store over the period of January to September of 2014. When a user runs an application, a set of APIs will be called. Each API is related to a particular permission, and the execution of the API can only succeed if the corresponding permission is granted by the user. These permissions are grouped into Normal, Dangerous, Signature and Signature Or System in Android, and are explicitly mentioned in the AndroidManifest.xml file of the APK by application developers. Task 2 (Incident Detection): This dataset contains operational log files captured from the Unified Threat Management (UTM) system of UniteCloud BID25. UniteCloud uses a resilient private cloud infrastructure to supply e-learning and e-research services for tertiary students and staff in New Zealand. Unified Threat Management is a rule-based real-time running system for the UniteCloud server. Each sample of a log file contains nine features; these features are operational measurements of 9 different sensors in the UTM system. Each sample is labeled based on knowledge of the incident status of the log samples. Task 3 (Fraud Detection): This dataset is anonymised data that was unified using the highly correlated rule based uniformly distributed synthetic data (HCRUD) approach by considering a similar distribution of features BID26.
The detailed statistics of the Task 1, Task 2 and Task 3 data sets are reported in TAB0. In order to find an optimal learning rate, we ran two trials of experiments for 500 epochs with the learning rate varying in the range [0.01-0.5]. The highest 10-fold cross validation accuracy was obtained using a learning rate of 0.1. There was a sudden decrease in accuracy at learning rate 0.2, and the highest accuracy was finally attained at learning rates of 0.35 and 0.45 in comparison to learning rate 0.1. This accuracy may have been enhanced by running the experiments for 1000 epochs. As the more complex architectures we experimented with showed lower performance within 500 epochs, we decided to use 0.1 as the learning rate for the rest of the experiments after considering training time and computational cost. The following network topologies were used in order to find an optimal network structure for our input data: 1) DNN 1 layer, 2) DNN 2 layers, 3) DNN 3 layers, 4) DNN 4 layers, 5) DNN 5 layers. For all the above network topologies, we ran 2 trials of experiments, each for 500 epochs. It was observed that most of the deep learning architectures learn the normal category patterns of the input data within 600 epochs. The number of epochs required to learn the malicious category data usually varies, and the more complex networks required a large number of iterations in order to reach their best accuracy. Finally, we obtained the best-performing network topology for each use case. For Task 2 and Task 3, the 4-layer DNN performed well; for Task 1, the 5-layer DNN performed better than the 4-layer DNN. We decided to use the 5-layer DNN for the rest of the experiments. The 10-fold cross validation accuracy of each DNN topology for all use cases is shown in TAB0. An intuitive overview of the proposed DNN architecture, Deep-Net, for all use cases is shown in FIG0. It contains an input layer, 5 hidden layers and an output layer. The input layer contains 4896 neurons for Task 1, 9 neurons for Task 2 and 12 neurons for Task 3. The output layer contains 2 neurons for Task 1, 3 neurons for Task 2 and 2 neurons for Task 3. The structure and configuration details of the proposed DNN architecture are shown in TAB0. The units from the input to hidden layers and from the hidden to output layers are fully connected. The DNN is trained using the backpropagation mechanism BID1. The proposed deep neural network is composed of fully-connected layers, batch normalization layers and dropout layers. Fully-connected layers: The units in this layer have a connection to every unit in the succeeding layer, which is why it is called a fully-connected layer. Generally, these fully-connected layers map the data into a higher dimension; the richer this representation, the more accurately the network can determine the correct output. It uses ReLU as the non-linear activation function. Batch Normalization and Regularization: To avoid overfitting and speed up DNN model training, Dropout (0.01) BID27 and Batch Normalization BID28 were used between the fully-connected layers. Dropout randomly removes neurons along with their connections. In our alternative architectures for Task 1, the deep networks could easily overfit the training data without regularization, even when trained on a large number of samples. Classification: For classification, the final fully connected layer uses a sigmoid activation function for Task 1 and Task 2, and softmax for Task 3.
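To make the stack described above concrete, the following is a minimal sketch assuming the TensorFlow/Keras framework mentioned earlier; the hidden-layer widths and the single sigmoid output unit are illustrative assumptions, not the exact configuration used in the experiments.

```python
# Illustrative sketch of a Deep-Net style stack (Keras); hidden widths are
# assumptions, not the paper's exact configuration.
from tensorflow.keras import layers, models, optimizers

def build_deep_net(input_dim, hidden_units=(1024, 768, 512, 256, 128), dropout=0.01):
    model = models.Sequential()
    model.add(layers.InputLayer(input_shape=(input_dim,)))
    for units in hidden_units:                   # fully-connected ReLU blocks
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())   # stabilizes and speeds up training
        model.add(layers.Dropout(dropout))       # regularization against overfitting
    model.add(layers.Dense(1, activation="sigmoid"))  # binary output (Task 1 / Task 2)
    model.compile(optimizer=optimizers.SGD(learning_rate=0.1),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

model = build_deep_net(input_dim=4896)  # Task 1 input size given in the text
```

For Task 3, the final layer would instead use 3 softmax units with a categorical cross entropy loss, as described next.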
The fully connected layers absorb the non-linear kernel; the sigmoid layer outputs 0 (benign) or 1 (malicious), while softmax provides a probability score for each class. The prediction loss for Task 1 and Task 2 is estimated using binary cross entropy, loss(pd, ed) = -1/N Σ_{i=1}^{N} [ed_i log(pd_i) + (1 - ed_i) log(1 - pd_i)], where pd is a vector of predicted probabilities for all samples in the testing data set and ed is a vector of expected class labels, with values either 0 or 1. The prediction loss for Task 3 is estimated using categorical cross entropy, loss(pd, ed) = -Σ_x ed(x) log(pd(x)), where ed is the true probability distribution and pd is the predicted probability distribution. We used sgd as the optimizer to minimize the binary cross entropy and categorical cross entropy losses. We evaluate the proposed DNN model against a classical machine learning classifier on three different cyber security use cases. The first use case is identifying Android malware based on API information, the second use case is incident detection over Unified Threat Management (UTM) operation on UniteCloud, and the third use case is fraud detection in financial transactions. During training, we pass a matrix of shape 30897×4896 for Task 1, 70000×9 for Task 2 and 70000×9 for Task 3 to the input layer of the DNNs. These inputs are passed through more than one hidden layer (specifically 5), and the output layer contains 1 neuron for Task 1 and Task 2 and 3 neurons for Task 3 (TAB0). XGBoost is short for Extreme Gradient Boosting, where the term Gradient Boosting was proposed in the paper Greedy Function Approximation BID29; XGBoost is based on this original model. XGBoost is used for the given supervised learning problems (Task 1, Task 2 and Task 3), where we use the training data (with multiple features) to predict a target variable. Here "multi:softmax" is used to perform the classification. After observation and experimentation, the "max depth" of the tree was set to 20. 10-fold cross validation is performed to observe the training accuracy. Except for Task 1, the data are loaded as-is using Pandas, and "NaN" values are replaced with 0. In Task 1 the data is represented as a term-document matrix, where the vocabulary is built using the API indicator numbers in train and test. The scikit-learn BID11 count vectorizer is used to develop the term-document matrix, and this representation is then fed to the XGBoost classifier for prediction. The winner of the CDMC 2017 tasks achieved 0.9405, 0.9998 and 0.9824 on Task 1, Task 2 and Task 3 respectively using a Random Forest classifier with Python scikit-learn BID11. The proposed method performed better on Task 2 in comparison to the winner of CDMC 2017, and the accuracy obtained by DNNs varies by -0.05% and -0.01% from the winner of CDMC 2017 on the other tasks. The reported results of DNNs can be further enhanced by simply adding hidden layers to the existing architecture, which we were not able to try. Moreover, the proposed method can implicitly obtain the best features itself. This paper has evaluated the performance of deep neural networks (DNNs) for cyber security use cases: Android malware classification, incident detection and fraud detection. Additionally, another classical machine learning classifier is used. In all cases, the performance of DNNs is good in comparison to the classical machine learning classifier. Moreover, the same architecture is able to perform better than the other classical machine learning classifier in all use cases. The reported results of DNNs can be further improved by extending training or stacking a few more layers onto the existing architectures.
This remains one of the directions for future work. | Deep-Net: Deep Neural Network for Cyber Security Use Cases | 639 | scitldr |
In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand, approaches to primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks. Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration. Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives. We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration. This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing. We have seen impressive progress over the recent years in learning based approaches to perform a plethora of manipulation tasks (; ;). However, these systems are typically task-centric savants - able only to execute the single task that they were trained for. This is because these systems, whether leveraging demonstrations or environmental rewards, attempt to learn each task tabula rasa, where low to high level motor behaviours are all acquired from scratch in the context of the specified task. In contrast, we humans are adept at a variety of basic manipulation skills, e.g. picking, pushing, grasping etc., and can effortlessly perform these diverse tasks via a unified manipulation system. (Figure caption: Sample motor programs that emerge by discovering the space of motor programs from a diverse set of robot demonstration data in an unsupervised manner. These motor programs facilitate understanding the commonalities across various demonstrations, and accelerate learning for downstream tasks.) How can we step away from the paradigm of learning task-centric savants, and move towards building similar unified manipulation systems? We can begin by not treating these tasks independently, but instead exploiting the commonalities across them. One such commonality relates to the primitive actions executed to accomplish the tasks - while the high-level semantics of tasks may differ significantly, the low and mid-level motor programs across them are often shared, e.g. to either pick or push an object, one must move the hand towards it. This concept of motor programs can be traced back to the work of Lashley, who noted that human motor movements consist of 'orderly sequences' that are not simply sequences of stimulus-response patterns. The term 'motor programs' is, however, better attributed to later work describing 'muscle commands that execute a movement sequence uninfluenced by peripheral feedback', though subsequent works shifted the focus from muscle commands to the movement itself, while allowing for some feedback. More directly relevant to our motivation is Schmidt's notion of 'generalized' motor programs that can allow abstracting a class of movement patterns instead of a singular one.
In this work, we present an approach to discover the shared space of (generalized) motor programs underlying a variety of tasks, and show that elements from this space can be composed to accomplish diverse tasks. Not only does this allow understanding the commonalities and shared structure across diverse skills, the discovered space of motor programs can provide a high-level abstraction using which new skills can be acquired quickly by simply learning the set of desired motor programs to compose. We are not the first to advocate the use of such mid-level primitives for efficient learning or generalization, and there have been several reincarnations of this idea over the decades, from'operators' in the classical STRIPS algorithm , to'options' or'primitives' in modern usage. These previous approaches however assume a set of manually defined/programmed primitives and therefore bypass the difficulty of discovering them. While some attempts have been made to simultaneously learn the desired skill and the underlying primitives, learning both from scratch is difficult, and are therefore restricted to narrow tasks. Towards overcoming this difficulty, we observe that instead of learning the primitives from scratch in the context of a specific task, we can instead discover them using demonstrations of a diverse set of tasks. Concretely, by leveraging demonstrations for different skills e.g. pouring, grasping, opening etc., we discover the motor programs (or movement primitives) that occur across these. We present an approach to discover movement primitives from a set of unstructured robot demonstration i.e. demonstrations without additional parsing or segmentation labels available. This is a challenging task as each demonstration is composed of a varying number of unknown primitives, and therefore the process of learning entails both, learning the space of primitives as well as understanding the available demonstrations in context of these. Our approach is based on the insight that an abstraction of a demonstrations into a sequence of motor programs or primitives, each of which correspond to an implied movement sequence, and must yield back the demonstration when the inferred primitives are'recomposed'. We build on this and formulate an unsupervised approach to jointly learn the space of movement primitives, as well as a parsing of the available demonstrations into a high-level sequence of these primitives. We demonstrate that our method allows us to learn a primitive space that captures the shared motions required across diverse skills, and that these motor programs can be adapted and composed to further perform specific tasks. Furthermore, we show that these motor programs are semantically meaningful, and can be recombined to solved robotic tasks using reinforcement learning. Specifically, solving reaching and pushing tasks with reinforcement learning over the space of primitives achieves 2 orders of magnitude faster training than reinforcement learning in the low-level control space. Our work is broadly related to several different lines of work which either learn task policies from demonstrations, or leverage known primitives for various applications, or learn primitives in context of known segments. While we discuss these relations in more detail below, we note that previous primitive based approaches either require: a) a known and fixed primitive space, or b) annotated segments each corresponding to a primitive. 
In contrast, we learn the space of primitives without requiring this segmentation annotation, and would like to emphasize that ours is the first work to do so for a diverse set of demonstrations spanning multiple tasks. Learning from Demonstration: The field of learning from demonstrations (LfD) has sought to learn to perform tasks from a set of demonstrated behaviors. A number of techniques exist to do so, including cloning the demonstrated behavior , fitting a parametric model to the demonstrations , or first segmenting the demonstrations and fitting a model to each of the ant segments (; ; ;). We point the reader to for a comprehensive overview of the field. Rather than directly learn to perform tasks, we use demonstrations to learn a diverse set of composable and reusable primitives, or motor-programs that may be used to perform a variety of downstream tasks. Learning and Sequencing Motion Primitives: Several works learn motion primitives from demonstrations using predetermined representations of skills such as Dynamic Movement Primitives (DMPs) . A few other works have approached the problem from an optimization perspective . Given these primitives, a question that arises is how to then sequence learned skills to perform downstream tasks. Several works have attempted to answer this question - builds a layered approach to adapt, select, and sequence DMPs.; segments demonstrations into sequences of skills, and also merge these skills into skill-trees. However, the predetermined representations of primitives adopted in these works can prove restrictive. In particular, it prevents learning arbitrarily expressive motions and adapting these motions to a generic downstream task. We seek to move away from these fixed representations of primitives, and instead learn representations of primitives along with the primitives themselves. The family of latent variable models (LVMs) provides us with fitting machinery to do so. Indeed, the success of LVMs in learning representations in deep learning has inspired several recent works (; ; ;) to learn latent representations of trajectories. SeCTAR builds a latent variable conditioned policy and model that are constrained to be consistent with one another, and uses the learned policies and model for hierarchical reinforcement learning. The CompILE framework seeks to learn variable length trajectory segments from demonstrations instead, and uses latent variables to represent these trajectory segments, but is evaluated in relatively low-dimensional domains. We adopt a similar perspective to these works and learn continuous latent variable representations (or abstractions) of trajectory segments. Hierarchical RL and the Options Framework: The related field of hierarchical reinforcement learning (HRL) learns a layering of policies that each abstract away details of control of the policies of a lower level. The Options framework also learns similar temporal abstractions over sequences of atomic actions. While promising, its application has traditionally been restricted to simple domains due to the difficulty of jointly learning internal option policies along with a policy over those options. Recent works have managed to do so with only a reward function as feedback. The Option-Critic framework employs a policy-gradient formulation of options to do so, while the Option-Gradient learns options in an off-policy manner. In contrast with most prior work in the options framework, (; learn options from a set of demonstrations, rather than in the RL setting. 
In a similar spirit to these works, we too seek to learn abstractions from a given set of demonstrations; however, unlike DDO, DDCO, and CompILE, we can learn primitives beyond a discrete set of options in a relatively high dimensional domain. Hierarchical representations of demonstrations: The idea of hierarchical task representations has permeated into LfD as well. In contrast to reasoning about demonstrations in a flat manner, one may also infer the hierarchical structure of tasks performed in demonstrations. A few recent works have striven to do so, by representing these tasks as programs, or as task graphs. Other works address generalizing to new instances of manipulation tasks in the low-shot regime by abstracting away low-level controls. The idea of policy sketches, i.e. a sketch of the sub-tasks to be accomplished in a particular task, has become popular: some works learn modular policies in the RL setting provided with such policy sketches, and others provide a modular LfD framework based on this idea of policy sketches. While all of these works address learning policies at various levels from demonstrations, unlike our approach, they each assume access to heavy supervision over demonstrations to do so. We seek to discover the space of motor programs directly from unstructured demonstrations in an unsupervised manner, and show that these can help in understanding similarities across tasks as well as quickly adapting to and solving new tasks. Building on these classical ideas, we define a motor program M as a movement pattern that may be executed in and of itself, without access to sensory feedback. Concretely, a 'movement sequence' or a motor program M is a sequence of robot joint configurations. Our goal is to learn the space of such movement patterns that are present across a diverse set of tasks. (Figure 2 caption: An overview of our approach. Our abstraction network takes in an observed demonstration τ_obs and predicts a sequence of latent variables {z}. These {z} are each decoded into their corresponding motor programs via the motor program network. We finally recompose these motor programs into the recomposed trajectory.) We do so via learning a 'motor program network' that maps elements z ∈ R^n to corresponding movement sequences, i.e. M: z −→ M. Given a set of N unlabelled demonstrations {τ_i}, i = 1, 2, ..., N, that consist of sequences of robot states {s_1, s_2, ..., s_T} ∈ S, our aim is to learn the shared space of motor programs via the motor program network M. However, as each demonstration τ_i is unannotated, we do not know a priori what subsequences of the demonstration correspond to a distinct motor program. Therefore, to learn motor programs from these demonstrations, we need to simultaneously learn to understand the demonstrated trajectories in terms of compositions of motor programs. Thus, in addition to learning the network M, we also learn a mapping A from each demonstrated trajectory τ_i to the underlying sequence of motor programs {M_1, M_2, ..., M_K} (and associated latent variables {z_1, z_2, ..., z_K}) executed during the span of the trajectory. We call this mapping A the abstraction network, as it abstracts away the details of the trajectory into the set of motor programs (i.e., abstractions). Note that both the abstraction and motor program networks are learned using only a set of demonstrations from across diverse tasks.
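A minimal, framework-agnostic sketch of how the two learned mappings fit together is given below; the function and variable names are ours, not the paper's, and the callables simply stand in for the trained networks.

```python
# Sketch of the recomposition implied by the M / A formulation above.
# `abstraction_net` stands in for A and `motor_program_net` for M (names are ours).
def recompose(demonstration, abstraction_net, motor_program_net):
    latents = abstraction_net(demonstration)             # A: trajectory -> [z_1, ..., z_K]
    segments = [motor_program_net(z) for z in latents]   # M: z_k -> implied movement sequence
    # concatenating the decoded segments gives the recomposed trajectory,
    # which is compared against the original demonstration during training
    return [state for segment in segments for state in segment]
```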
Concretely, an abstraction of a demonstration into a sequence of motor programs, each of which corresponds to an implied motion sequence, must yield back the demonstration when the inferred motor programs are decoded and 'recomposed'. We operationalize this insight to jointly train the motor program and abstraction networks from demonstrations. Learning Overview and Objective: Our approach is outlined in Fig 2, where given an input demonstration trajectory τ_obs, we use the abstraction network to predict a sequence of (a variable number of) latent codes {z_k}. These are each decoded into corresponding sub-trajectories via the learned motor program network M. Given the decoding of the predicted motor programs, we can recompose these sub-trajectories to obtain a recomposed demonstration trajectory τ_rec, and penalize the discrepancy between the observed and the recomposed demonstrations. Denoting by ⊕ the concatenation operator, our loss for a given demonstration τ_obs is therefore characterized as L(τ_obs) = ∆(τ_obs, τ_rec), with τ_rec = M(z_1) ⊕ M(z_2) ⊕ ... ⊕ M(z_K) and {z_k} = A(τ_obs). As the trajectories τ_obs, τ_rec are possibly of different lengths, we use a pairwise matching cost between the two trajectories, where the optimal alignment is computed via dynamic time warping. This provides us with a more robust cost measure that handles different prediction lengths, is invariant to minor velocity perturbations, and enables the model to discard regions of inactivity in the demonstrations. Given two trajectories τ_a ≡ (s^a_1, ..., s^a_{Ta}) and τ_b ≡ (s^b_1, ..., s^b_{Tb}), and a distance metric δ over the state space, the discrepancy measure between trajectories can be defined as the matching cost for the optimal matching path among all possible valid matching paths P (i.e. paths satisfying monotonicity, continuity, and boundary conditions): ∆(τ_a, τ_b) = min_{P ∈ P} Σ_{(i,j) ∈ P} δ(s^a_i, s^b_j). As the recomposed trajectory comprises distinct primitives, each of which implies a sequence of states, we sometimes observe discontinuities, i.e. large state changes between the boundaries of these primitives. To prevent this, we additionally incorporate a smoothness loss L_sm(τ_rec) that penalizes the state change across consecutive time-steps if it is larger than a certain margin. Our overall objective, comprising the reconstruction objective and the smoothness prior, allows us to jointly learn the space of motor programs and the abstraction of trajectories in an unsupervised manner. Network Architecture and Implementation Details. We parameterize our motor program network M and our abstraction network A as neural networks. In particular, the motor program network is a 4 layer LSTM that takes a single 64 dimensional latent variable z as input, and predicts a sequence of 16 dimensional states. For our abstraction network, we adopt the Transformer architecture to take in a varying length 16 dimensional continuous joint angle trajectory τ as input, and predict a variable number of latent variables {z} that correspond to the sequence of motor programs {M} executed during trajectory τ. We find the transformer architecture to be superior to LSTMs for processing long trajectories, due to its capacity to attend to parts of the trajectory as required. Our abstraction network A predicts a varying number of primitives by additionally predicting a 'continuation probability' p_k after each motor program variable z_k. We then predict an additional primitive only if the sampled discrete variable from p_k is 1, and therefore also need to learn the prediction of these probabilities.
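Returning to the matching cost ∆ defined above, it can be evaluated with the standard dynamic-programming recursion for dynamic time warping. The sketch below (NumPy) only illustrates how the alignment cost itself is computed; the Euclidean state distance used for δ is our assumed choice, and how the full training objective is differentiated through the alignment is not shown.

```python
# Sketch of the DTW discrepancy between two trajectories (NumPy).
# delta is the state-space distance; Euclidean distance here is an assumption.
import numpy as np

def dtw_discrepancy(traj_a, traj_b, delta=lambda u, v: np.linalg.norm(u - v)):
    n, m = len(traj_a), len(traj_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = delta(traj_a[i - 1], traj_b[j - 1])
            # valid matching paths are monotonic and continuous, so each cell
            # extends an alignment from the left, lower, or lower-left neighbor
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # cost of the optimal alignment path
```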
While the loss function above can directly yield gradients to the predicted motor program encoding z k via M, we use gradients using (with ∆(τ obs, τ rec) + L sm (τ rec) as negative reward) to learn prediction of p k. While the objective described so far can in principle allow us to jointly learn the space of motor programs and understand the demonstration trajectories as a composition of these, there are additional properties we would wish to enforce to the bias the learning towards more desirable solutions. As an example, our framework presented so far can allow a solution where each demonstration is a motor program by itself i.e. the abstraction network can learn to map each demonstration to a unique z, and the primitive decoder can then decode this back. However, this is not a suitable solution as the learned motor programs are not'simple'. On the other extreme, a very simple notion of a motor program is one that models each transition independently. However, this is again undesirable as this does not'abstract' away the details of control or represent the demonstration as a smaller number of motor programs. Therefore, in addition to enforcing that the learned motor programs recompose the demonstrations, we also need to enforce simplicity and parsimony of these motor programs. We incorporate these additional biases by adding priors in the objective or model space. To encourage the abstraction model to learn parsimonious abstractions of the input demonstrations, we penalize the number of motor primitives used to recompose the trajectory, by adding a small constant to the negative reward used to train the continuation probability p k if the corresponding sample yielded an additional primitive. To enforce simplicity of motor primitives, we observe that the trajectories yielded by a classical planner (in our case, RRT-Connect) e.g. to go from an initial to final state are'simple' and we therefore initialize the motor primitive network using the decoder of a pretrained autoencoder on random planner trajectories for (start, goal) state pairs. Note that this notion of'plannability' as'simplicity' is merely one plausible alternative, and alternate ones can be explored e.g.'linearity' or'predictability of motion' . We would like to ascertain how well our approach is capable of achieving our objective of successfully discovering and learning a good representation of the space of motor programs. Further, we seek to verify whether despite being learned in an unsupervised manner without semantic grounding, the learned primitive space is semantically meaningful. We would also like to evaluate how well they can be used to solve downstream RL tasks. We first provide a description of the data we wish to learn primitives from, followed by describing our quantitative and qualitative experiments towards verifying these three axes. Dataset: We use the MIME dataset to train and evaluate our model. The dataset consists of over 8000 kinesthetic demonstrations of 20 tasks (such as pouring, pushing, bottle opening, stacking objects, etc.) collected on a real-world Baxter Robot. While the dataset has head and hand-mounted RGBD data, we use the Baxter joint angle trajectories to train our model. We consider a 16 dimensional space as our input and prediction space, consisting of 7 joints for each of the 2 arms, along with a scalar value for each gripper (we ignore torso and head joints). We emphasize this is a higher dimensional domain than most other related works consider. 
The gripper values are re-scaled to a 0 − 1 range, while the joint angles are unnormalized. We temporally down-sample all joint angle data by a constant factor of 20 from the original data frequency of 100 Hz. We randomly sample a train set of 5900 demonstrations from all 20 tasks, with a validation set of 1600 trajectories, and a held-out test set of 850 trajectories. To help evaluate the learned motor programs, we manually annotate a set of 60 test trajectories (3 trajectories from each task) with temporal segmentation annotations, as well as semantic labels of 10 primitives (such as reaching, twisting, pushing, etc.) that occur in these 60 trajectories. Note that these labels are purely for evaluation, our model does not have access to these annotations during training. We would first like to evaluate the quality of the learned abstractions in and of themselves, i.e. Is our approach able to discover the space of motor programs, and learn a good representation of this space? We qualitatively answer this question by visualizing the latent representation space of motor primitives learned by our model. We first randomly sample a set of 500 trajectories unseen during training, then pass these trajectories through our model, and retrieve the predicted latent variables {z} for each of these trajectories and their corresponding movement sequences {τ}. We then embed the latent variables in a 2-dimensional space using T-SNE (van der), and visualize the corresponding movement sequences at their corresponding position in this 2-D embedded space, as in Fig. 3. We provide a GIF version of Fig. 3 (and other visualizations) at https://sites.google.com/view/discovering-motor-programs/home. We observe that clusters of movement sequences emerge in this embedded space based on the relative motions executed during the course of these trajectory segments (similar latent variables correspond to similar movement sequences and vice versa). While these clusters are not explicitly semantically labelled by our approach, the motions in these clusters correlate highly with traditional notions of skills in robot manipulation, i.e. reaching motions (top and top-left clusters), returning motions (bottom cluster), bi-manual motions (visible to the bottom right of the left most cluster and the bottom of the right most cluster), etc. We visualize a few such primitives (reaching, twisting, grasping, bi-manual pulling, etc.) among these to the right of Fig. 3. This shows our model learns smooth mappings M and A, and is capable of discovering such primitives in an unsupervised manner without explicit temporal segmentation or semantic labels, which we believe is an encouraging . Interestingly, the model learns abstractions that pick up on the trend of the motion, rather than distinguishing between whether the left or right hand is used for the motion. This is particularly notable in the case of reaching and returning motions, where both left and right-handed reaching and returning motions appear alongside each other in their respective clusters in the embedded space. We would ideally like the learned primitives from our model to be useful on a real Baxter robot platform, and be suitably smooth, feasible, and correspond to the motions executed in simulation (i.e. be largely unaffected by the noise of execution on a real robot). 
To verify whether our model is indeed able to learn such primitives, we execute a small set of learned primitives on a real Baxter robot, by feeding the trajectory predicted by the model into a simple position controller. We visualize the results of this execution in Fig. 4 (see project webpage for videos). Despite not explicitly optimizing for feasibility or transfer to a real robot, the use of real-world Baxter data to train our model heavily biases the model towards primitives that are inherently feasible, relatively smooth, and can be executed on a real robot without any subsequent modifications. As the 'recomposed' trajectory can be aligned to the original demonstration via sequence alignment, our predicted abstractions induce a partitioning of the demonstrated trajectory (corresponding to the aligned boundaries of the predicted primitives). We test whether the predicted abstraction and induced partitions are consistent across different demonstrations from the same task. To this end, we select 3 instances of the "Drop Object" task from the annotated test set, and retrieve the induced segmentations of the demonstrations. We then visualize these segmentations and the motor programs predicted for each of these 3 demonstrations as depicted in Fig. 5, along with the ground truth semantic labels of primitives for each of the demonstrations. The alignment between a recomposed trajectory and the original demonstration also allows us to transfer semantic annotations from a demonstration to the predicted primitives, by simply copying labels from the demonstration to their aligned timepoints in the primitives. Therefore, using our small set of annotated demonstrations, we can construct a small library of semantically annotated primitives. Given a novel, unseen demonstration, we can compute its predicted primitives and assign each a semantic label by copying the label of the nearest primitive from the library. This allows us to transfer semantic segmentations from our small annotated test set to unseen demonstrations. Our model's transfer of semantic segmentation achieves a label accuracy of 58% on the set of 30 held-out trajectories (across all 20 tasks), while a supervised LSTM baseline that predicts semantic labels from trajectories (trained on 30 annotated test trajectories) achieves 54% accuracy. The consistency of our abstractions, coupled with the ability to transfer semantic segmentations across demonstrations, shows our model is capable of understanding commonalities across demonstrations, and is able to reason about various demonstrations in terms of motor programs shared across them, despite being trained in an unsupervised manner. (Figure 5 caption: Each row represents a different instance of a "Drop Objects" task from the MIME Dataset, while each column represents a time-step in the demonstration. White frames represent predicted segmentation points, while colored boxes represent ground truth semantic annotations. Red boxes are reaching primitives, blue boxes are grasping, orange boxes are placing, and green boxes are returning primitives. We see our model predicts 4 motor programs - reaching, grasping, placing the object a small distance away, and returning. This is consistent with the true semantic annotations of these demonstrations, and the overall sequence of primitives expected of the "Drop Box" task.) Figure 6: RL training curves with and without motor programs ((a) Sparse Baxter Reacher Task, (b) Sparse Baxter Pushing Task).
Solid lines denote mean success rate, while shaded region denotes ±1 standard deviation across 10 random seeds. One of our primary motivations for learning motor programs is that they can be composed together to solve downstream robotic tasks. To evaluate whether the motor programs learned by our model are indeed useful for such downstream tasks, we adopt a hierarchical reinforcement learning setup (as described in detail in the supplementary). For a given task, we train a policy to predict the sequence of motor programs to execute. Given the predicted latent representations, each motor program is decoded into its corresponding motion sequence using the previously learned motor program network. We retrieve a sequence of desired joint velocities from this motion sequence and use a joint velocity controller to execute these "low-level" actions on the robot. The motor program network and the joint velocity controller together serve as an "abstraction" of the low-level control that is executed on the robot. Hence, the policy must learn to predict motor programs that correspond to motion sequences useful for solving the task at hand. As demonstrated in Fig. 6, training a policy using motor programs is several orders of magnitude more efficient than training with direct low-level actions. For the sparse reaching task, the motor program policy learns within 50 motor program queries, 2 orders of magnitude speedup in low-level control time-steps. For the sparse pushing task, the motor program policy learns within 1000 motor program queries, or a 2X speedup with respect to low-level control time-steps. We note that executing a motor program corresponds to 50 low level control steps (see appendix for details). However, as these motor programs are executed without environment feedback, the improvement in efficiency in terms of environment interactions is all the more significant. We have presented an unsupervised approach to discover motor programs from a set of unstructured robot demonstrations. Through the insight that learned motor programs should recompose into the original demonstration while being simplistic, we discover a coherent and diverse latent space of primitives on the MIME dataset. We also observed that the learned primitives were semantically meaningful, and useful for efficiently learning downstream tasks in simulation. We hope that the contributions from our work enable learning and executing primitives in a plethora of real-world robotic tasks. It would also be interesting to leverage the learned motor programs in context of continual learning, to investigate how the discovered space can be adapted and expanded in context of novel robotic tasks. We provide additional visualizations of primitives being executed on the real robot below. As mentioned in the main paper, dynamic GIFs of these visualizations may be found at https://sites.google.com/view/discovering-motor-programs/home. Figure 7: Depiction of execution of additional learned primitives on real world Baxter robot. As in Fig. 4, each row is a single primitive, while columns show progress of the primitive over time. Row 1 depicts a left handed returning primitive, row 2 depicts a right handed pushing primitive, row 3 depicts a left handed pushing primitive (in a different configuration to the left handed one), and finally row 4 depicts a right handed twisting primitive. 
For our hierarchical reinforcement learning experiments, we perform policy learning on two sparse reward tasks on a simulated Baxter Robot: (a) Reaching, and (b) Pushing. For the reaching task, the robot's end-effector needs to reach a specific location in space, while for the pushing task, the robot needs to push a block on the table to a specific location. For both tasks, the policy gets a reward of 1 if it is within 0.05m of the goal and a reward of 0 otherwise. We train both our motor program policy and the baseline control policy using Proximal Policy Optimization. Note that the motor program policy outputs the latent representation z. Each z expands into a length-10 trajectory according to the motor program network. To reach each of these trajectory states, a PD velocity controller is used for 5 time-steps. The baseline control policy directly outputs the velocity control action. | We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks. | 640 | scitldr |
Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data. However, labeling these large datasets can be both cumbersome and costly. In this paper, we apply weak supervision to time series data, and programmatically label a dataset from sensors worn by patients with Parkinson's. We then build an LSTM model that predicts when these patients exhibit clinically relevant freezing behavior (inability to make effective forward stepping). We show that when our model is trained using patient-specific data (prior sensor sessions), we come within 9% AUROC of a model trained using hand-labeled data, and when we assume no prior observations of subjects, our weakly supervised model matches the performance of the model trained on hand-labeled data. These results demonstrate that weak supervision may help reduce the need to painstakingly hand label time series training data. Time series data generated by wearable sensors are an increasingly common source of biomedical data. With their ability to monitor events in non-laboratory conditions, sensors offer new insights into human health across a diverse range of applications, including continuous glucose monitoring BID1, atrial fibrillation detection BID11, fall detection BID2, and general human movement monitoring BID6. Supervised machine learning with sensor time series data can help automate many of these monitoring tasks and enable medical professionals to make more informed decisions. However, developing these supervised models is challenging due to the cost and difficulty of obtaining labeled training data, especially in settings with considerable inter-subject variability, as is common in human movement research BID5. Traditionally, medical professionals must hand label events observed in controlled laboratory settings. When the events of interest are rare, this process is time consuming, expensive, and does not scale to the sizes needed to train robust machine learning models. Thus there is a need to efficiently label the large amounts of data that machine learning algorithms require for time series tasks. In this work, we explore weakly supervised models BID10 for time series classification. Instead of using manually labeled training data, weak supervision encodes domain insights in the form of heuristic labeling functions, which are used to create large, probabilistically labeled training sets. This method is especially useful for time series classification, where the sheer number of data points makes manual labeling difficult. As a motivating test case, we focus on training a deep learning model to classify freezing behaviors in people with Parkinson's disease. We hypothesize that by encoding biomechanical knowledge about human movement and Parkinson's BID5 into our weakly supervised model, we can reduce the need for large amounts of hand labeled data and achieve similar performance to fully supervised models for classifying freezing behavior. We focus on two typical clinical use cases when making predictions for a patient: one where we have no prior observations of the patient, and one where we have at least one observation of the patient. In weak supervision, noisy training labels are programmatically generated for unlabeled data using several heuristic labeling functions which encode specific domain knowledge. These labeling functions are modeled as a generative process, which allows us to denoise the labels by learning their correlation structure and accuracies BID10.
These labeling functions are of the form λ: X → Y ∪ ∅, taking in a single candidate x ∈ X and outputting a label y ∈ Y, or ∅ if the function abstains. Using n labeling functions on m unlabeled data points, we create a label matrix L ∈ (Y ∪ ∅)^{m×n}. We then create a generative model from this label matrix and three factor types (labeling propensity, accuracy, and pairwise correlation) of labeling functions: φ^{Lab}_{i,j}(L, Y) = 1{L_{ij} ≠ ∅}, φ^{Acc}_{i,j}(L, Y) = 1{L_{ij} = y_i}, and φ^{Corr}_{i,j,k}(L, Y) = 1{L_{ij} = L_{ik}} for (j, k) ∈ C, where C are the potential correlations. Next, we concatenate all these factors for a given data point x_i and all labeling functions j = 1...n, resulting in φ_i(L, Y), and learn the parameters w ∈ R^{2n+|C|} to maximize the objective ŵ = argmax_w log Σ_Y p_w(L, Y), where p_w(L, Y) ∝ exp(Σ_{i=1}^{m} w^T φ_i(L, y_i)). With this generative model, we can then generate probabilistic training labels Ỹ = p_ŵ(Y | L), where ŵ are the learned parameters of the label model. Using these probabilistic labels, we can train a discriminative model that we aim to generalize beyond the information encoded in the labeling functions. We do this by minimizing the expected loss with respect to Ỹ: θ̂ = argmin_θ (1/m) Σ_{i=1}^{m} E_{ỹ∼Ỹ}[l(h_θ(x_i), ỹ)]. As we increase the amount of unlabeled data, we increase predictive performance BID9. We use a dataset that contains series of measurements from 36 trials from 9 patients that have Parkinson's Disease (PD) and exhibit freezing behavior. PD is a neurodegenerative disease marked by tremor, loss of balance, and other motor impairments, and affects over 10 million people worldwide. Freezing of gait (FOG) - a sudden and brief episode where an individual is unable to produce effective forward stepping BID3 BID4 - is one of the disabling problems caused by PD, and often leads to falls BID0. In this dataset, subjects walked in a laboratory setting that the investigators designed to elicit freezing events. Leg or shank angular velocity was measured during the forward walking task using wearable inertial measurement units (sampled at 128 Hz), which were positioned in a standardized manner for all subjects and tasks on the top of the feet, on both shanks, and on the lumbar and chest trunk regions. In this work, we focus on sensor streams from both shanks, though it is straightforward to include the other sensor streams (e.g. lumbar, feet, etc.). From each Turning and Barrier Course run, we extract left and right ankle gyroscope data in the z-direction from each trial (up to 4 trials per course run), along with the gold labels for these trials, which were manually recorded by a neurologist. We combine the data from all trials from all course runs, and segment the sensor data by gait cycle (of the right leg), which is computed analytically from the angular velocity of the ankle sensor data. In this case, we define a gait cycle as the time period between two successive peaks on an angular velocity versus time plot. We then define a single candidate to be x_{a×b} ∈ X, where a is the number of sensor streams and b is the sequence length. For our task, a = 2 since we use the left and right ankle sensor streams, and b is the sequence length for a single gait cycle (which varies slightly from cycle to cycle). To programmatically label data, we use five labeling functions which draw on domain specific knowledge and empirical observations. Specifically, these labeling functions target features which can distinguish freezing and non-freezing events. For all labeling functions, we assign positive, negative, or abstain labels based on empirically measured threshold values from the validation set.
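As a concrete illustration of this positive / negative / abstain scheme, the sketch below mirrors the stride-time arrhythmicity heuristic described next; the function name, label constants, and argument are illustrative stand-ins based on that description, not a reproduction of the authors' code.

```python
# Illustrative labeling function lambda: candidate score -> {FREEZE, NO_FREEZE, ABSTAIN}.
# Thresholds follow the arrhythmicity heuristic described in the text (0.55 / 0.15).
FREEZE, NO_FREEZE, ABSTAIN = 1, 0, None

def lf_arrhythmicity(arrhythmicity_score, high=0.55, low=0.15):
    # arrhythmicity_score: mean coefficient of variation of recent stride times,
    # computed from the left/right ankle gyroscope streams of the candidate
    if arrhythmicity_score > high:
        return FREEZE
    if arrhythmicity_score < low:
        return NO_FREEZE
    return ABSTAIN  # defer to the other labeling functions / the label model
```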
For example, one heuristic we employ uses stride time arrhythmicity BID7 BID8, which we calculate as average coefficient of variation for the past 3 stride times of the left and right leg. For this function, we label a candidate as freezing if the arrhythmicity of that candidate is greater than 0.55, and not freezing if the arrhythmicity is less than 0.15. If arrhythmicity for a particular candidate is in between these two values, we abstain. Other labeling functions we use involve the swing angular range of the shank, and the amplitude and variance in shank angular velocity. Using these labeling functions, we build a generative label model and predict probabilistic labels y ∈ Y for each candidate x a×b ∈ X in the training set FIG0 ). See TAB0 for the individual performance of each labeling function. We then train a discriminative model on the probabilistic labels from the generative model that incorporates the labeling functions discussed in the last section. We use a single layer bi-directional LSTM and hidden state dimension 300 for our end model that takes in a multivariate sensor stream as input. Since we use time series data from only the left and right ankle sensors, our input is two dimensional. In order to provide longer temporal context, we pass in a windowed version of each candidate that includes the last three gait cycles and the next gait cycle FIG0 ). Since sequence length of a single gait cycle slightly varies, we then pad these sequences and truncate any sequences over a pre-defined maximum sequence length. To provide more contextual signal, we also add multiplicative attention to pool over the hidden states in the LSTM. We evaluate our weakly supervised model in two typical clinical settings, and compare performance with that of a fully supervised model. In the first setting, we split the data into training/validation/testing by trials/sessions. In this setting, both the validation and testing set have a single trial from each patient, and the training set has one or more trials from each patient. In the second setting, we split data by patient. In this case, the testing (and validation) set contains all the trials from a novel patient. We then cross-validate on each patient. These are summarized in TAB1.From the session splits setting, we note that our weakly supervised model comes within 10 points in F1 score and 8 points in AUROC of the fully supervised (hand-labeled) model. In the patient splits setting, our weakly supervised model matches the performance of the fully supervised model. In both supervision types, it is clear that our end to end system performs significantly better when it has seen 1 or more sessions of a particular patient before. In the patient splits setting, our system has difficulty generalizing to certain patients. For example, in TAB2 we see that the fully supervised model has trouble predicting freezing events for patients P1, P7, and P8 in particular. These difficulties are inherent to the problem -each patient exhibits different freezing behaviors, and some, such as P1, P2, and P8, have relatively rare freezing events. This highlights why, at least for this task, it is critical to have as many subjects as possible in the dataset -an objective that is far easier to meet if hand labels are not required. Our work demonstrates the potential of weak supervision on time series tasks. In both experiments, our weakly supervised models performed close to or match the fully supervised models. 
Further, the amount of data available for the weak supervision task was fairly small; with more unlabeled data, we expect to further improve performance BID9. These results show that costly and time-intensive hand labeling may not be required to reach the desired performance of a given classifier. In the future, we plan to add more and different types of sensor streams and modalities (e.g., video). We also plan to use labeling functions to better model the temporal correlation between individual segments of these streams, which can potentially improve our generative model and hence the end-to-end performance. | We demonstrate the feasibility of a weakly supervised time series classification approach for wearable sensor data. | 641 | scitldr |
Learning semantic correspondence between structured data (e.g., slot-value pairs) and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation. Recent neural generation methods require large-scale training data. However, the collected data-text pairs used for training are usually only loosely corresponded: texts contain additional or contradictory information compared to their paired inputs. In this paper, we propose a local-to-global alignment (L2GA) framework to learn semantic correspondences from loosely related data-text pairs. First, a local alignment model based on multi-instance learning is applied to build the semantic correspondences within a data-text pair. Then, a global alignment model built on top of a memory-guided conditional random field (CRF) layer is designed to exploit dependencies among alignments in the entire training corpus, where the memory is used to integrate the alignment clues provided by the local alignment model. It is therefore capable of inducing missing alignments for text spans that are not supported by their imperfect paired input. Experiments on a recent restaurant dataset show that our proposed method improves alignment accuracy and, as a by-product, can also be used to induce semantically equivalent training data-text pairs for neural generation models. Learning semantic correspondences between structured data (e.g., slot-value pairs in a meaning representation (MR)) and associated description texts is one of the core problems in the NLP community, e.g., data-to-text generation produces texts based on the learned semantic correspondences. Recent data-to-text generation methods, especially data-hungry neural methods, adopt data-text pairs collected from the web for training. Such collected corpora usually contain loosely corresponded data-text pairs, where text spans contain information that is not supported by the imperfect structured input. Figure 1 depicts an example, where the slot-value pair Price=Cheap can be aligned to the text span low price range, while the text span restaurant is not supported by any slot-value pair in the paired input MR. Most previous work on learning semantic correspondences focuses on characterizing local interactions between every text span and a corresponding slot presented in its paired MR. Such methods cannot be applied directly to loosely corresponded data-text pairs, as the setting is different. In this work, we take a step towards explicit semantic correspondences (i.e., alignments) in loosely corresponded data-text pairs, going beyond the traditional setting, which only attempts to induce alignments for text spans with a corresponding slot presented in the paired MR. We propose a Local-to-Global Alignment (L2GA) framework, where the local alignment model discovers the correspondences within a single data-text pair (e.g., low price range is aligned with the slot Price in Figure 1) and a global alignment model exploits dependencies among alignments across the entire set of data-text pairs, and is therefore able to induce missing attributes for text spans not supported by the noisy input data (e.g., restaurant is aligned with the slot EatType in Figure 1). Specifically, our proposed L2GA is composed of two parts. The local alignment model is a neural method optimized via a multi-instance learning paradigm, which automatically captures correspondences by maximizing the similarities between co-occurring slots and texts within a data-text pair.
Our proposed global alignment model is a memory guided conditional random field (CRF) based sequence labeling framework. The CRF layer is able to learn dependencies among semantic labels over the entire corpus and therefore is suitable for inferring missing alignments of unsupported text spans. However, since there are no semantic labels provided for sequence labeling, we can only leverage limited supervision provided in a data-text pair. We start by generating pseudo labels using string matching heuristic between words and slots (e.g., Golden Palace is aligned with Name in Figure 1). The pseudo labels in large portion of unmatched text spans (e.g., low price and restaurant cannot be directly matched in Figure 1), we tackle this challenge by: a) changing the calculation of prediction probability in CRF layer, where we sum probabilities over possible label sequences for unmatched text spans to allow inference on unmatched words; b) incorporating alignment produced by the local alignment model as an additional memory to guide the CRF layer, therefore, the semantic correspondences captured by local alignment model can together work with the CRF layer to induce alignments locally and globally. We conduct experiments of our proposed method on a recent restaurant dataset, E2E challenge benchmark (a), show that our framework can improve the alignment accuracy with respect to previous methods. Moreover, our proposed method can explicitly detect unaligned errors presented in the original training corpus and provide semantically equivalent training data-text pairs for neural generation models. Experimental also show that our proposed method can improve content consistency for neural generation models. Here, we provide a brief description of learning alignments in loosely corresponded data-text pairs. Given a corpus with paired meaning representations (MR) and text descriptions {(R, X)} N i=1. The input MR R = (r 1, . . ., r M) is a set of slot-value pairs r j = (s j, v j), where each r j contains a slot s j (e.g., Price) and a value v j (e.g., Cheap). According to s j, the corpus has K unique slots in total, where K >= M. The corresponding description X = (x 1, . . ., x T) is a sequence of words describing the MR. The task is to match every word x i in text X with a possible slot. Note that for a data-text pair, not all slot-value pairs are mentioned in paired text, and not all words in text can be grounded to one of M slots in paired MR. However, some of unaligned words can be ground to one of K slots in the whole corpus. An example of alignments in a data-text pair is shown in Figure 1. The example displays differences between MR and text: contradiction (Rating:low corresponds to the text span:highly recommended), extra slots in text (EatType:restaurant). We add a special label NULL indicating words without any specific semantic annotation (e.g., stopwords). In the next section, we present our approach to address this task. Our proposed method is a local-to-global alignment (L2GA) model, as shown in Figure 2. It consists of two modules. The local model first encodes both description text X and its paired MR R using contextualized encoders, then acquires semantic alignments by computing similarities in between words and slot-value pairs presented in its paired MR R. 
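A minimal sketch of the string-matching pseudo-labeling step described above might look as follows; it prefers longer matches as a simple proxy for maximizing the number of matched tokens, and the handling of unknown-type entities is omitted for brevity.

```python
def string_match_pseudo_labels(tokens, mr, null_label="NULL"):
    """Create per-token pseudo labels by exact string matching between the
    words of the text and the slot values in its paired MR.

    tokens: list[str] words of the description.
    mr: list of (slot, value) pairs, e.g. ("Name", "Golden Palace").
    Tokens left unmatched keep the NULL label here; a fuller version would
    additionally mark unmatched entity mentions as unknown-type entities.
    """
    labels = [null_label] * len(tokens)
    # Try longer values first so that e.g. "Golden Palace" beats "Palace".
    for slot, value in sorted(mr, key=lambda sv: -len(sv[1].split())):
        v_toks = value.lower().split()
        n = len(v_toks)
        for i in range(len(tokens) - n + 1):
            window = [t.lower() for t in tokens[i:i + n]]
            if window == v_toks and all(l == null_label for l in labels[i:i + n]):
                for j in range(i, i + n):
                    labels[j] = slot
    return labels
```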
As the input MR can be incomplete, a global model with a specific CRF layer is proposed to exploit dependencies among alignments over the entire corpus and therefore produces possible semantic labels for text spans not supported in the paired MR. Moreover, to incorporate the alignment guidance provided by the local model, a specific memory is integrated into CRF layer to make the final alignment decision. The local model tries to induce semantic labels for words in text X with respect to its paired input MR. Given a data-text pair (R, X), we can only assume that words in the description X are positively related for some slot-value pairs in R but the exact alignments are not provided. One possible way is to discover the fine-grained annotations (i.e., word alignments) from the coarse level supervisions (i.e., the similarity between a MR-text pair). Following , we formulate this task into a multi-instance learning problem . We first introduce the encoders for input MR R and description text X, then the alignment objectives to acquire the word level annotations for text X. MR Encoder: A slot-value pair r in MR can be treated as a short sequence w 1,..., w n by concatenating words in its slot and value. The word sequence is first represented as a sequence of word embedding vectors (v 1, . . ., v n) using a pre-trained word embedding matrix E w, and then passed through a bidirectional LSTM layer to yield the contextualized representations H = (h 1, . . ., h n). To produce a summary context vector, we adopt the same self-attention structure in to obtain the vector of slot-value pair c, due to the effectiveness of self-attention modules over variable-length sequences. where W s is a trainable parameter and β is the learned importance. We also embed each slot s i into a slot vector as where E z is a trainable slot embedding matrix. Sentence Encoder: For description X = (x 1, . . ., x T), each word x t is first embeded into vector e t by concatenating the word embedding and character-level representation generated with a group of convolutional neural network (CNNs). Then we feed the word vectors e 1,..., e T to a bidirectional LSTM to obtain contextualized vectors U = (u 1, . . ., u T). Alignment Objective: Our goal is to maximize the similarity score between the MR-text pair (R, X), and we will also learn the contribution of word-level annotations for words and slot-value pairs. Concretely, we first embed slot-value pairs in the input R = (r 1, ..., r M) into context vectors c 1,..., c M using the MR encoder defined in Eq.1. Similarly, we obtain the contextual vectors u 1,..., u T of description X using the sentence encoder defined in Eq.3. This similarity between MR-text pair is in turn defined on the top of the similarity scores among vector representations of slot-value pairs in R and words in description X as follows. where · refers inner product of two vectors. The function in Eq.4 aims to align each word with the best scoring slot-value pair. Note that each word x t is aligned with a slot-value pair r i if the similarity (i.e., inner product between two vectors) is larger than a threshold. To train the local alignment model, The loss function defined in Eq.5 is to encourage related MR R and description X to achieve higher similarity than other MR R = R and texts X = X: Since the data-text pairs are loosely corresponded, there exists text spans not supported by its noisy paired input. 
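The word-to-slot similarity of Eq. 4 and a contrastive loss in the spirit of Eq. 5 can be sketched as follows in PyTorch; the margin value and the single-negative setup are illustrative simplifications rather than the exact training objective.

```python
import torch

def pair_similarity(U, C):
    """Eq. 4-style similarity between a text and an MR: each word is scored
    against its best-matching slot-value pair.  U: [T, d] contextual word
    vectors, C: [M, d] slot-value context vectors from the two encoders."""
    scores = U @ C.t()                     # [T, M] inner products
    return scores.max(dim=1).values.sum()

def alignment_loss(U, C, U_neg, C_neg, margin=1.0):
    """Contrastive objective in the spirit of Eq. 5: the paired (R, X) should
    score higher than mismatched text / MR combinations by a margin."""
    pos = pair_similarity(U, C)
    neg_text = pair_similarity(U_neg, C)   # true MR paired with a mismatched text
    neg_mr = pair_similarity(U, C_neg)     # true text paired with a mismatched MR
    zero = torch.tensor(0.0)
    return torch.max(zero, margin - pos + neg_text) + torch.max(zero, margin - pos + neg_mr)
```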
To induce semantic labels for those text spans, our proposed global alignment model is built on a CRF based sequence labeling framework which is capable of leveraging dependencies among alignments. Compared to conventional sequence labeling problem, our scenario differs in two aspects: i) lacking training labels for sequence labeling; ii) leveraging alignment information provided by the local alignment model. To overcome the issue of lacking word-level annotations, we first generate pseudo labels for words in texts by exact string matching, where conflicted matches are resolved by maximizing the total number of matched tokens . Based on the of dictionary matching, each word falls into one of three categories: 1) it belongs to an entity mention with one slot presented in its paired MR; 2) it belongs to an (unknown) entity where its slot is either not directly labeled using string matching or not represented in its paired MR; 3) it is marked as a non-entity 1. To allow inducing semantic labels for words with unknown types, we change the sequence paths in CRF layer. To incorporate semantic annotations learned by local model, particularly for text spans that are not directly recognized by string heuristics and mislabeled as an unknown entities (e.g., affordable in Figure 2), the alignments are treated as a soft memory to integrate into the CRF layer. Modified LSTM-CRF: In conventional LSTM-CRF based sequence labeling model , given the text description X = {x t} T t=1 and the pseudo labels Y = {y t} T t=1. We first obtain contextual representations U for words in description X using the Eq. 3, and context vector u t for word x t is decoded by a linear layer W c into the label space to compute the score P t,yt for label y t. On top of the model, a CRF layer is applied to capture the dependencies among predicted labels. We define the score of the predicted sequence, the score of the predicted (y 1, ..., y T) as: where, Φ yt,yt+1 is the transition probability from a label y t to its next label y t+1. Φ is a (K + 2) × (K +2) matrix, where K is the number of distinct labels (i.e., unique slots in the entire corpus). Two additional labels start and end are used (only used in the CRF layer) to represent the beginning and end of a sequence, respectively. The conventional CRF layer maximizes the probability of the only valid label sequence. However, there are entities with unknown types in our scenario (e.g., text spans restaurant and affordable Under review as a conference paper at ICLR 2020 in Figure 2 are unknown entities). We instead maximize the total probability of all possible label sequences by enumerating all the possible tags for entities with unknown types. The optimization goal is defined as: where Y X refers to all possible label sequences for X, and Y possible contains all the possible label sequences for entities with unknown type. Note that, if there are no entities with unknown type in description text X, it is equivalent to the conventional CRF. Integrate Local Alignment Clues: The local alignment model can provide alignment supervisions for words that are lexically different but semantically relevant to slot-value pairs in its paired MR. To incorporate the induced semantic labels provided by local alignment model, we design a specific memory into sequence labeling framework. Specially, for each word x t in description X, we select the most probable slot s i by computing similarity provided by local alignment model in Eq. 
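The key modification to the CRF layer, summing over all label sequences that are consistent with the per-token constraints (matched tokens allow only their dictionary label, unknown-type entity tokens allow every slot label, and non-entity tokens allow only NULL), can be sketched with a constrained forward algorithm. This is a simplified sketch, not the exact implementation: start/end transitions are omitted, and the training loss would then be the unconstrained log-partition minus this constrained score.

```python
import torch

def constrained_log_partition(emissions, transitions, allowed):
    """Sum (in log space) the scores of all label sequences consistent with
    per-token constraints.

    emissions:   [T, K] per-token label scores P_t
    transitions: [K, K] label transition scores Phi
    allowed:     list of T lists of permitted label indices per token
    With every label allowed at every position this reduces to the usual
    (unconstrained) log-partition of the CRF.
    """
    neg_inf = float("-inf")
    T, K = emissions.shape
    alpha = torch.full((K,), neg_inf)
    alpha[allowed[0]] = emissions[0, allowed[0]]
    for t in range(1, T):
        nxt = torch.full((K,), neg_inf)
        for j in allowed[t]:
            nxt[j] = torch.logsumexp(alpha + transitions[:, j], dim=0) + emissions[t, j]
        alpha = nxt
    return torch.logsumexp(alpha, dim=0)
```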
4, and compute the slot representation d t as follows where α t,i refers to the probability that word x t is related to slot s i in MR and z i is the slot embedding for slot s i defined in Eq. 2. We then utilize the alignment information d t to help the calculation of the prediction score P t in Equation 6. Concretely, we modified the Eq. 6 as following: where [,] refers to concatenation of two vectors. In this way, the alignments produced by local alignment model can act as a guidance to help inducing the labels of entities in texts. During training, we optimize the global model by minimizing negative log-likelihood p(Y |X) of the score defined in Eq. 8 for path Y given the text description X. We optimize the local and global model jointly using the following training loss: where L co is the alignment objective of local alignment model defined in Eq.5 and λ is a hyper parameter and we set λ to 1 according to the validation set. For inference, we apply Viterbi decoding to obtain the alignments for description texts by maximizing the score defined in Eq. 7. Our experiments are conducted on E2E challenge (b) dataset, which aims at verbalizing all information from the MR. It has 42,061, 4,672 and 4,693 MR-text pairs for training, validation and testing, respectively. Note that every input MR in this dataset has 8.65 different references on average. Our proposed model produces alignments based on the unique slots presented in the entire dataset. The unique slots in this dataset are {N ame, N ear, EatT ype, Rating, F ood, P rice, Area, F amilyF riendly}. It is difficult to evaluate the accuracy of alignment for the entire corpus, since the alignments are not provided in the original data. Due to the ambiguity of alignment boundaries (e.g., it is reasonable to tag all three words in price is low as Price or a single word low as Price), different alignment models have different alignment boundaries accordingly. Instead, alignments can be used to reproduce a refined MR by recovering slot-value pairs using the detected spans and its corresponding labels (e.g., word price is low and its label Price refers to a slot-value pair Price:low), more details in Appendix A.2. To make fair comparisons, we evaluate the alignments by its produced MR. The testset contains 630 unique input MRs, we randomly sample a reference for each MR, and recruited three human annotators to label the 630 data-text pairs. The annotators were required to refine original input MRs if reference text contains contradicted or unsupported facts 2. We calculate the precision and recall for the refined MR produced by alignment models with the annotated one. We compare our proposed alignment model with the following neural baselines: i) MIL , which refers to the local model. Note that each word is assigned to a slot if the semantic similarity defined in Eq.4 is larger than 0.1; ii) Distant LSTM-CRF , which is a dictionary based sequence labeling model for distant supervised name entity recognition (NER). We make adaptation by treating the paired MR as the dictionary to create initial training labels described in Section 3.2 and train a LSTM-CRF model based on the pseudo training data; iii) Modified LSTM-CRF, which is our proposed global model without leveraging local alignment information as described in Section 3.2. Table 1 presents the of our proposed method (L2GA) with other baselines. the MIL is the local alignment model, which can only leverage the information within a data-text pair. 
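A sketch of how the local alignment clues enter the CRF layer: the word-to-slot similarities are turned into a soft slot memory d_t, which is concatenated with the contextual word vector before projecting to the label scores (the modified prediction score described above). The projection layer is an assumed torch.nn.Linear; the joint loss would then add the local alignment objective with weight λ as in the text.

```python
import torch
import torch.nn.functional as F

def local_alignment_memory(U, C, Z):
    """Soft slot memory d_t for each word.  U: [T, d] word vectors, C: [M, d]
    slot-value vectors of the paired MR, Z: [M, d_z] embeddings of the
    corresponding slots; alpha_{t,i} is the word-to-slot probability."""
    alpha = F.softmax(U @ C.t(), dim=1)   # [T, M]
    return alpha @ Z                      # [T, d_z] memory vectors d_t

def emission_scores(U, D, proj):
    """Concatenate each contextual word vector with its memory vector before
    projecting to the label space; `proj` is a torch.nn.Linear(d + d_z, K)."""
    return proj(torch.cat([U, D], dim=-1))   # [T, K]
```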
Therefore it is incapable of inducing potential alignments for text spans that are not supported by its paired MR. While our proposed method L2GA can exploit dependencies among alignments globally, therefore, improves the overall alignment performance (11.43% F1 improvement with respect to MIL). The other two methods are distant supervised sequence labeling approaches, which can be treated as simpler variations of our proposed global alignment model. The Distant LSTM-CRF performs worse than Modified LSTM-CRF which indicates the necessity of exploring all possible sequences in CRF layer for unknown entities. In this way, the model is able to induce a potential semantic labels for unknown type entities. Additionally, both Distant LSTM-CRF and Modified LSTM-CRF models utilize the information in its paired MR only in the creation of pseudo labels. Labels created by string matching is mislabeled as unmatched entities for text spans that are semantically equivalent but lexically different to some slot-value pairs (e.g., afforable is closely related to the slot-value pair Price:Cheap in Figure 2). While our proposed method can leverage the alignment information provided in its paired MR by the local alignment model simultaneously and therefore achieves substantial improvements. As our proposed method is target on learning the alignments in loosely related data-text pairs, we pick data-text pairs in testset where the human annotated MR contains additional or contradicted slot-value pairs compared to the original MR, and we report the performance of each method on the Noisy data-text pairs. Results in Table 1 shows that the performance of local model MIL decrease dramatically, while global models such as Modified LSTM-CRF and L2GA are less sensitive, which proves the necessity of using a global model in learning alignments for loosely related data-text pairs. Our proposed method L2GA outperforms the Modified LSTM-CRF in both settings. The further illustrate that both local and global models are essential for learning alignments in loosely related data-text pairs. We report detailed alignment F1 scores of our proposed method under each slot shown in Table 2. Our proposed L2GA achieves best in 4 out of 8 slots. The local model performs Under review as a conference paper at ICLR 2020 Table 3: Different combinations of local models with sequence labeling framework bad in EatType, which is one of the most common missing slot in the training set. The slot familyFriendly contains various expressions in corresponding texts, where Modified LSTM-CRF performs a lot worse than our proposed L2GA. The indicates the necessity of integrating alignment guidance from the local model. We also investigate different ways of incorporating local model with the sequence labeling framework. A straight forward way is to create new pseudo labels for sequence labeling framework using the alignments produced by local models and train a LSTM-CRF model based on the new training labels. Table 3 gives the . The of the separate model performs worse than our proposed L2GA, which indicates that accurate training labels are essential to sequence labeling. L2GA dynamically integrate the provided by the local model without introducing label noise for training, therefore achieves better . In this section, we provide an extrinsic evaluation by testing whether alignments can help neural generation. 
Neural generation models trained on noisy data-text pairs suffers from hallucination , where the generated texts produce contradicted or irrelevant facts with respect to its paired input. Alignments can produce a refined MR for each data-text pair, therefore, we can create a refined training corpus by applying our proposed method L2GA in training dataset. We use the new training corpus to train a sequence-to-sequence (S2S) generation model. To evaluate the correctness of generation, a well-crafted rule-based aligner built by is adopted to approximately reflect the semantic correctness. The error rate is calculated by matching the slot values in output texts containing missing or conflict slots in the realization given its input MR. The generation are shown in Table 4. Vanilla S2S model trained on loosely related data-text pairs performs poorly in generation correctness. After training on the corpus refined by our proposed L2GA method, S2S model can reduce the inconsistent errors in a large margin. The also indicates the value of studying alignments in the setting of loosely related data-text pairs, which can be of help to automatically reduce data noise in large datasets. 4.6 QUALITATIVE ANALYSIS Figure 3 gives the alignment produced by different models. We can see that local model MIL cannot induce the label for text spans kid friendly as it is contradicted with the slot-value pair FamilyFriendly:no. While global models can induce the semantic label for the text span kid friendly with the corresponding label FamilyFriendly. Moreover, the Modified LSTM-CRF has difficulty in labeling lexically different but semantically equivalent word highly. While L2GA Under review as a conference paper at ICLR 2020 L2GA: The Cricketers is a kid friendly restaurant that serves English food near All Bar One in the riverside area. It has a price range of 20-25 pounds and is a highly rated restaurant. Food Near Area Price Rating EatType The Cricketers is a kid friendly restaurant that serves English food near All Bar One in the riverside area. It has a price range of 20-25 pounds and is a highly rated restaurant. can dynamically integrate the alignment provided by the local model, therefore produce the semantic labels Rating for text span highly rated correctly. Previous work exploiting loosely aligned data and text corpora have mostly focused on discovering verbalisation spans for data units. These line of work usually follows a two stage paradigm: firstly, data units are aligned with sentences from related corpora using heuristics and then subsequently extra content is discarded in order to retain only text spans verbalising the data. use a measure of association between data units and words to obtain verbalisation spans. extract patterns from paths in dependency trees. One exception is , the induced alignments are used to guide the generation. Our work takes a step further to also induce alignments for text spans not supported by the noisy paired input with possible semantics. Our work is also related to previous work on extracting information from user queries with the backend data structure. Most of these approaches contain two steps. Initially, a separate model is applied to match the unstructured texts with relevant input records and then an extraction model is learned based on collected annotations. and train a language model on data records to identify related text spans in book description. Several approaches train a CRF based extractor to detect the related text spans . 
apply a generalized expectation criteria to learn alignments between database and the texts, and train the information extractor to induce semantic annotations for text spans. Compared to these work, our approach is an unified neural based alignment model which avoids the error propagation of each step. In this paper, we study the problem of learning alignments in loosely related data-text pairs. We propose a local-to-global framework which not only induces semantic correspondences for words that are related to its paired input but also infers potential labels for text spans that are not supported by its incomplete input. We find that our proposed method improves the alignment accuracy, and can be of help to reduce the noise in original training corpus. In the future, we will explore more challenging datasets with more complex data schema. Under review as a conference paper at ICLR 2020 are 300 and 100 respectively. The dimensions of trainable hidden units in LSTMs are all set to 400. We first pre-train our local model for 5 epochs and then train our proposed local-to-global model jointly with 10 epochs according to validation set. During training, we regularize all layers with a dropout rate of 0.1. We use stochastic gradient descent (SGD) for optimisation with learning rate 0.015. The gradient is truncated by 5. Given a MR-text pair (R, X) along with its induced alignments Y, our goal is to recover a refined MR R by making use alignments Y. Intuitively, values for several slots belong to string values (e.g., text span The Cricketers with semantic label Name), where the value is directly recovered by the corresponding text spans (e.g., Name:The Cricketers). In the E2E dataset, there are two slots with a string value (i.e., Name and Near). The rest of slots use categorical values. To recover for categorical values, we apply a simple retrieval based method. Specifically, we collect the text spans with the detect labels (i.e., slots) in the training corpus with its corresponding slot-value pair presented in the MR (e.g., text span kid friendly with slot FamilyFriendly). Since the MR can be inaccurate, text spans with a specific label might have multiple referring slot-value pairs (e.g., text span kid friendly has two options FamilyFriendly:yes and FamilyFriendly:no). We calculate the frequency of candidate slot-value pairs, and use the most frequent one (e.g., kid friendly is recovered to FamilyFriendly:yes as it co-occurs with FamilyFriendly:yes a lot more than FamilyFriendly:no). | We propose a local-to-global alignment framework to learn semantic correspondences from noisy data-text pairs with weak supervision | 642 | scitldr |
Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of reinforcement learning, at the cost of requiring an expert demonstrator -- typically a person -- to provide the demonstrations. In this paper, we ask: can we use imitation learning to train effective policies without any expert demonstrations? The key observation that makes this possible is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can still serve as optimal examples for other tasks. In particular, in the setting where the tasks correspond to different goals, every trajectory is a successful demonstration for the state that it actually reaches. Informed by this observation, we propose a very simple algorithm for learning behaviors without any demonstrations, user-provided reward functions, or complex reinforcement learning methods. Our method simply maximizes the likelihood of actions the agent actually took in its own previous rollouts, conditioned on the goal being the state that it actually reached. Although related variants of this approach have been proposed previously in imitation learning settings with example demonstrations, we present the first instance of this approach as a method for learning goal-reaching policies entirely from scratch. We present a theoretical linking self-supervised imitation learning and reinforcement learning, and empirical showing that it performs competitively with more complex reinforcement learning methods on a range of challenging goal reaching problems. Reinforcement learning (RL) algorithms hold the promise of providing a broadly-applicable tool for automating control, and the combination of high-capacity deep neural network models with RL extends their applicability to settings with complex observations and that require intricate policies. However, RL with function approximation, including deep RL, presents a challenging optimization problem. Despite years of research, current deep RL methods are far from a turnkey solution: most popular methods lack convergence guarantees or require prohibitive numbers of samples . Moreover, in practice, many commonly used algorithms are extremely sensitive to hyperparameters . Besides the optimization challenges, another usability challenge of RL is reward function design: although RL automatically determines how to solve the task, the task itself must be specified in a form that the RL algorithm can interpret and optimize. These challenges prompt us to consider whether there might exist a general method for learning behaviors without the need for complex, deep RL algorithms. Imitation learning is an alternative paradigm to RL that provides a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic complexities and optimization challenges of RL. Supervised learning algorithms in deep learning have matured to the point of being robust and reliable, and imitation learning algorithms have demonstrated success in acquiring behaviors robustly and reliably from high-dimensional sensory data such as images . 
The catch is that imitation learning methods require an expert demonstrator -typically a human -to provide a number of demonstrations of optimal behavior. Obtaining expert demonstrations can be challenging; the large number of demonstrations required limits the scalability of such algorithms. In this paper, we ask: can we use ideas from imitation learning to train effective policies without any expert demonstrations, retaining the benefits of imitation learning, but making it possible to learn goal-directed behavior autonomously from scratch? The key observation for making progress on this problem is that, in the multi-task setting, trajectories that are generated by a suboptimal policy can serve as optimal examples for other tasks. In particular, in the setting where the tasks correspond to reaching different goal states, every trajectory is a successful demonstration for the state that it actually reaches. Similar observations have been made in prior works as well (; ; ; ;), but have been used to motivate data reuse in off-policy RL or semiparametric methods. Our approach will leverage this idea to obtain near-optimal goal-conditioned policies without RL or reward functions. The algorithm that we study is, at its core, very simple: at each iteration, we run our latest goalconditioned policy, collect data, and then use this data to train a policy with supervised learning. Supervision is obtained by noting that each action that is taken is a good action for reaching the states that actually occurred in future time steps along the same trajectory. This algorithm resembles imitation learning, but is self-supervised. This procedure combines the benefits of goal-conditioned policies with the simplicity of supervised learning, and we theoretically show that this algorithm corresponds to a convergent policy learning procedure. While several prior works have proposed training goal-conditioned policies via imitation learning based on a superficially similar algorithm , to our knowledge no prior work proposes a complete policy learning algorithm based on this idea that learns from scratch, without expert demonstrations. This procedure reaps the benefits of off-policy data re-use without the need for learning complex Q functions or value functions. Moreover, we can bootstrap our algorithm with a small number of expert demonstrations, such that it can continue to improve its behavior self supervised, without dealing with the challenges of combining imitation learning with off-policy RL. The main contribution of our work is a complete algorithm for learning policies from scratch via goal-conditioned imitation learning, and to show that this algorithm can successfully train goalconditioned policies. Our theoretical analysis of self-supervised goal-conditioned imitation learning shows that this method optimizes a lower bound on the probability that the agent reaches the desired goal. Empirically, we show that our proposed algorithm is able to learn goal reaching behaviors from scratch without the need for an explicit reward function or expert demonstrations. Our work addresses the same problem statement as goal conditioned reinforcement learning (RL) (; ; ;), where we aim to learn a policy via RL that can reach different goals. Learning goal-conditioned policies is quite challenging, especially when provided only sparse rewards. This challenge can be partially mitigated by hindsight relabeling approaches that relabel goals retroactively (; ;). 
However, even with relabelling, the goalconditioned optimization problem still uses unstable off-policy RL methods. In this work, we take a different approach and leverage ideas from supervised learning and data relabeling to build offpolicy goal reaching algorithms which do not require any explicit RL. This allows GCSL to inherit the benefits of supervised learning without the pitfalls of off-policy RL. While, in theory, on-policy algorithms might be used to solve goal reaching problem as well, their inefficient use of data makes it challenging to apply these approaches to real-world settings. Our algorithm is based on ideas from imitation learning via behavioral cloning but it is not an imitation learning method. While it is built on top of ideas from supervised learning, we are not trying to imitate externally provided expert demonstrations. Instead, we build an algorithm which can learn to reach goals from scratch, without explicit rewards. A related line of work has explored how agents can leverage expert demonstrations to bootstrap the process of reinforcement learning. While GCSL is an algorithm to learn goal-reaching policies from scratch, it lends itself naturally to bootstrapping from demonstrations. As we show in Section 5.4, GCSL can easily incorporate demonstrations into off-policy learning and continue improving, avoiding many of the challenges described in Kumar et al. (2019b). Recent imitation learning algorithms propose methods that are closely related to GCSL. aim to learn general goal conditioned policies from "play" data collected by a human demonstrator, and perform goal-conditioned imitation learning where expert goal-directed demonstrations are relabeled for imitation learning. However, neither of these methods are iterative, and both require human-provided expert demonstrations. Our method instead iteratively performs goal-conditioned behavioral cloning, starting from scratch. Our analysis shows that performing such iterated imitation learning on the policy's own sampled data actually optimizes a lower bound on the probability of successfully reaching goals, without the need for any expert demonstrations. The cross-entropy method , self-imitation learning , rewardweighted regression , path-integral policy improvement , reward-augmented maximum likelihood ), and proportional cross-entropy method selectively weight policies or trajectories by their performance during learning, as measured by then environment's reward function. While these may appear procedurally similar to GCSL, our method is fully self-supervised, as it does not require a reward function, and is applicable in the goal-conditioned setting. Additionally, our algorithm continues to perform well in the purely off-policy setting, where no new data is collected, a key difference from other algorithms . A few works similar to ours in spirit study the problem of learning goal-conditioned policy without external supervision. Zero-shot visual imitation uses an inverse model with forward consistency to learn from novelty seeking behavior, but lacks convergence guarantees and requires learning a complex inverse model. Semi-parametric methods learn a policy similar to ours but do so by building a connectivity graph over the visited states in order to navigate environments, which requires large memory storage and computation time that increases with the number of states. Goal Reaching We consider the goal reaching in an environment defined by the tuple S, A, T, ρ(s 0), T, p(g). 
S and A correspond to the state and action spaces respectively, T (s |s, a) to the transition kernel, ρ(s 0) to the initial state distribution, T the horizon length, and p(g) to the distribution over goal states g ∈ S. We aim to find a time-varying goal-conditioned policy π(·|s, g, h): S × S × [T] → ∆(A), where ∆(A) is the probability simplex over the action space A and h is the remaining horizon. We will say that a policy is optimal if it maximizes the probability the specified goal is reached at the end of the episode: It is important to note here that we are not considering the notion of optimality to be finding the shortest path to the goal, but merely saying that the trajectory must reach the goal at the end of T time-steps. This problem can equivalently be cast in reinforcement learning. The modified state space S = S × S × [T] contains the current state, goal, and the remaining horizon; the modified appropriately handles the modified state space; and the reward function r((s, g, h)) = 1(s = g, h = 0) depends on both the goal and the time step. Because of the special structure of this formulation, off-policy RL methods can relabel an observed transition ((s, g, h), a, (s, g, h − 1)) to that of a different goal g and different horizon h like ((s, g, h), a, (s, g, h − 1)). A common approach is to relabel trajectories with the goal they actually reached instead of the commanded goal, and often referred to as hindsight experience replay . We consider algorithms for goal-reaching that use behavior cloning, a standard method for imitation learning. In behavior cloning for goal-conditioned policies, an expert policy provides demonstrations for reaching some target goals at the very last timestep, and we aim to find a policy that best predicts the expert actions from the observations. More formally, Goal-conditioned imitation learning investigate a similar formalism that makes an additional assumption on the quality of the expert demonstrations: that the expert is optimal not just for reaching s T, but also optimal for reaching all the states s 1,... s T −1 preceding it. This corresponding policy is for t, h > 0 and t + h ≤ T. Note that implement a special case of this objective where the policy is independent of the horizon. In the next section, we will discuss how repeatedly alternating between data collection and goal-conditioned imitation learning can be used to learn a goal-reaching policy. Perhaps surprisingly, this procedure optimizes the objective in Equation 1, without relying on expert demonstrations. The goal-conditioned imitation learning in prior work show that expert demonstrations can provide supervision not only for the task the expert was aiming for, but also for reaching any state along the expert's trajectory . Can we design a procedure that uses goal-conditioned behavior cloning as a subroutine, that does not need any expert demonstrations, but that nonetheless optimizes a well-defined reward function? In this work, we show how the idea of imitation learning with data relabeling can be re-purposed to construct a learning algorithm that is able to learn how to reach goals from scratch without any expert demonstrations. We shed light on the reasons that imitation learning with data relabeling is so powerful, and how building an iterative algorithm out of this procedure gives rise to a method that optimizes a lower bound on a reinforcement learning objective, while providing a number of benefits over standard RL algorithms. 
It is important to note here that we are not proposing an imitation learning algorithm, but an algorithm for learning how to reach goals from scratch without any expert demonstrations. We are simply leveraging ideas from imitation learning to build such a goal reaching algorithm. Figure 1: Goal conditioned supervised learning: We can learn how to reach goals by simply sampling trajectories, relabeling them to be optimal in hindsight and treating them as expert data, and then performing supervised learning via behavior cloning. First, consider goal conditioned imitation learning via behavioral cloning with demonstrations (Equation 2). This scheme works well given expert data D, but expert data is unavailable when we are learning to reach goals from scratch. Can we use goal conditioned behavior cloning to learn how to reach goals from scratch, without the need for any expert demonstrations? To leverage behavior cloning when learning from scratch, we use the following insight: while an arbitrary trajectory from a sub-optimal policy may be suboptimal for reaching the intended goal, it may be optimal for reaching some other goal. In the goal-reaching formalism defined in Equation 1, recall a policy is optimal if it maximizes the probability that the goal is reached at the last timestep of an episode. This notion of optimality doesn't have to take the direct or shortest path to the goal, it simply has to eventually reach the goal. Under this notion of optimality, we can use a simple data relabeling scheme to construct an expert dataset from an arbitrary set of trajectories. Consider a trajectory τ = {s 1, a 1, s 2, a 2, . . ., s T, a T} obtained by commanding the policy π θ (a|s, g, h) to reach some goal g. Although the actions may be suboptimal for reaching the commanded goal g, they do succeed at reaching the states s t+1, s t+2,... that occur later in the observed trajectory. More precisely, for any timestep t and horizon h, the action a t in state s t is likely to be a good action for reaching s t+h in h timesteps, and thus useful supervision for π θ (·|s t, s t+h, h). This step of autonomous relabeling allows us to convert suboptimal trajectories into optimal goal reaching trajectories for different goals, without the need for any human intervention. To obtain a concrete algorithm, we can relabel all such timesteps and horizons in a trajectoryto create an expert dataset according to Because the relabelling procedure is valid for any horizon h ≤ T, we can relabel every such combination to create T 2 optimal tuples of (s, a, g, h) from a single trajectory. This relabeled dataset can then be used to perform goal-conditioned behavioral cloning to update the policy π θ. While performing one iteration of goal conditioned behavioral cloning on the relabeled dataset is not immediately sufficient to reach all desired goals, we will show that this procedure does in fact optimize a lower bound on a well-defined reinforcement learning objective. As described concretely in Algorithm 1, the algorithm proceeds as follows: Sample a goal from a target goal distribution p(g). Execute the current policy π(a|s, g, h) for T steps in the environment to collect a potentially suboptimal trajectory τ. Relabel the trajectory according to the previous paragraph to add T 2 new expert tuples (s t, a t, s t+h, h) to the training dataset. Perform supervised learning on the entire dataset to update the policy π(a|s, g, h) via maximum likelihood. 
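The hindsight relabeling step described above can be written in a few lines; every action becomes supervision for each state that actually occurred later in the same rollout.

```python
def relabel_trajectory(states, actions):
    """states: [s_0, ..., s_T], actions: [a_0, ..., a_{T-1}] from one rollout.
    Returns (state, action, goal, horizon) tuples in which the action taken at
    s_t is treated as expert supervision for reaching the state visited
    (goal_t - t) steps later."""
    data = []
    for t in range(len(actions)):
        for goal_t in range(t + 1, len(states)):
            data.append((states[t], actions[t], states[goal_t], goal_t - t))
    return data
```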
We term this iterative procedure of sampling trajectories, relabelling them, and training a policy until convergence goal-conditioned supervised learning (GCSL). Algorithm 1 (GCSL): initialize the policy π_1(· | s, g, h) and a dataset D of tuples (s, a, g, h); then, for each iteration k: sample g ∼ p(g) and execute π_k in the environment trying to reach g; relabel the collected trajectory and add the resulting tuples to D; optimize the policy on D via supervised learning; end for; end procedure. The GCSL algorithm (described above) provides us with an algorithm that can learn to reach goals from the target distribution p(g) simply using iterated behavioral cloning. The resulting goal reaching algorithm is off-policy, uses low variance gradients, and is simple to implement and tune without the need for any explicit reward function engineering or demonstrations. Additionally, since this algorithm is off-policy and does not require a value function estimator, it is substantially easier to bootstrap from demonstrations when real demonstrations are available, as our experiments will show in Section 5.4. While the GCSL algorithm is simple to implement, does this algorithm actually solve a well-defined policy learning problem? In this section, we argue that GCSL maximizes a lower bound on the probability for a policy to reach commanded goals. We start by writing the probability that policy π conditioned on goal g produces trajectory τ as π(τ | g) = p(s_0) ∏_{t=0}^{T} π(a_t | s_t, g, T − t) T(s_{t+1} | s_t, a_t). We define G(τ) = s_T as the final state of a trajectory. Recalling Equation 1, the target goal-reaching objective we wish to maximize is the probability of reaching a commanded goal: J(π) = E_{g ∼ p(g), τ ∼ π(·|g)}[1(G(τ) = g)]. In the language of reinforcement learning, we are optimizing a multi-task problem where the reward in each task is an indicator that the goal was reached. The distribution over tasks (goals) of interest is assumed to be pre-specified as p(g). In the on-policy setting, GCSL performs imitation learning on trajectories relabeled with the goals that were actually reached by the current policy, an objective that can be written as J_GCSL(π) = E_{g ∼ p(g), τ ∼ π_old(·|g)}[ Σ_{t=0}^{T} log π(a_t | s_t, G(τ)) ]. Here, π_old corresponds to a copy of π through which gradients do not propagate, following the notation of prior work. Our main result, which is derived in the on-policy data collection setting, shows that optimizing J_GCSL(π) optimizes a lower bound on the desired objective J(π) (proof in Appendix B): Theorem 4.1. Let J_GCSL and J be as defined above. Then J(π) ≥ J_GCSL(π) + C, where C is a constant that does not depend on π. Note that to prevent J(π) and J_GCSL(π) from being degenerate, the probability of reaching a goal under π must be nonzero; in scenarios where such a condition does not hold, the bound remains true, albeit vacuous. The tightness of this bound can be controlled by the effective error in the GCSL objective; we present this, alongside a technical analysis of the bound, in Appendix B.1. This indicates that in the regime with expressive policies, where the loss function can be minimized well, GCSL will improve the expected reward. In our experimental evaluation, we aim to answer the following questions: 1. Does GCSL effectively learn goal-conditioned policies from scratch? 2. Does the performance of GCSL improve over successive iterations? 3. Can GCSL learn goal-conditioned policies from high-dimensional image observations? 4. Can GCSL incorporate demonstration data more effectively than standard RL algorithms? We consider a number of simulated control environments: 2-D room navigation, object pushing with a robotic arm, and the classic Lunar Lander game, shown in Figure 2.
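A minimal Python sketch of the overall loop in Algorithm 1, assuming a gym-style environment and hypothetical `policy.act` / `policy.fit` interfaces (and the `relabel_trajectory` helper sketched earlier):

```python
def gcsl(env, policy, sample_goal, num_iters, horizon):
    """Minimal GCSL loop: collect a rollout toward a sampled goal, relabel it
    in hindsight, and fit the policy by maximum likelihood on all data so far."""
    dataset = []
    for _ in range(num_iters):
        g = sample_goal()
        s = env.reset()
        states, actions = [s], []
        for h in range(horizon, 0, -1):
            a = policy.act(s, g, h)            # assumed policy interface
            s, _, _, _ = env.step(a)           # gym-style step is assumed
            states.append(s)
            actions.append(a)
        dataset.extend(relabel_trajectory(states, actions))
        policy.fit(dataset)                    # supervised: maximize log pi(a | s, g, h)
    return policy
```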
The tasks allow us to study the performance of our method under a variety of system dynamics, both low-dimensional state inputs and high-dimensional image observations, and in settings with both easy and difficult exploration. For each task, the target goal distribution corresponds to a uniform distribution over all reachable configurations. Performance of a method is quantified by the distance of the agent to the goal at the last timestep (not by the number of time-steps to the goal as is sometimes considered in goal reaching). We present full details about the environments, evaluation protocol, and hyperparameter choices in Appendix A. For the practical implementation of GCSL, we parametrize the policy as a neural network that takes in state, goal, and horizon as input and outputs a parametrized action distribution. We find that omitting the horizon from the input to the policy still provides good , despite the formulation suggesting that the optimal policy is most likely non-Markovian. We speculate that this is due to optimal actions changing only mildly with different horizons in our tasks. Full details about the implementation for GCSL are presented in Appendix A.1. Figure 3: State-based tasks: GCSL is competitive with state-of-the-art off-policy value function RL algorithms for goal-reaching from low-dimensional sensory observations. Shaded regions denote the standard deviation across 3 random seeds (lower is better). We evaluate the effectiveness of GCSL for reaching goals on the domains visualized in Figure 2, both from low-dimensional proprioception and from images. To better understand the performance of our algorithm, we compare to two families of reinforcement learning algorithms for solving goalconditioned tasks. First, we consider off-policy temporal-difference RL algorithms, particular TD3-HER , which uses hindsight experience replay to more efficiently learn goal-conditioned value functions. TD3-HER requires significantly more machinery than our simple procedure: it maintains a policy, a value function, a target policy, and a target value function, all which are required to prevent degradation of the learning procedure. We also compare with on-policy reinforcement learning algorithms such as TRPO that cannot leverage data relabeling, but often provide more stable optimization than off-policy methods. Because these methods cannot relabel data, we provide an epsilon-ball reward corresponding to reaching the goal. Details for the training procedure for these comparisons, along with hyperpa- rameter and architectural choices, are presented in Appendix A.2. Videos and further details can be found at https://sites.google.com/view/gcsl/. We first investigate the learning performance of these algorithms from low-dimensional sensor observations, as shown in Figure 3. We find that on the pushing and lunar lander domains, GCSL is able to reach a larger set of goals consistently than either RL algorithm. Although TD3 is able to fully solve the navigation task, on the other domains which require synthesis of more challenging control behavior, the algorithm makes slow, if any, learning progress. Given a limited amount of data, TRPO performs poorly as it cannot relabel or reuse data, and so cannot match the performance of the other two algorithms. When scaling these algorithms to image-based domains, which we evaluate in Figure 4, we find that GCSL is still able to learn goal-reaching behaviors on several of these tasks. albeit slower than from state. 
For most tasks, from both state and images, GCSL is able to reach within 80% of the desired goals and learns at a rate comparable to or better than previously proposed off-policy RL methods. This evaluation demonstrates that simple iterative self-imitation is a competitive scheme for reaching goals in challenging environments, and that it scales favorably with the dimensionality of the state and the complexity of the behaviors. To better understand the learning behavior of the algorithm, we investigate how GCSL performs as we vary the quality and quantity of data, the policy class we optimize over, and the relabelling technique (Figure 5). Full details for these scenarios can be found in Appendix A.4. First, we consider how varying the policy class can affect the performance of GCSL. In Section 5.1, we hypothesized that optimizing over a Markovian policy class would perform well compared to maintaining a non-Markovian policy. We find that allowing policies to be time-varying ("Time-Varying Policy" in Figure 5) can drastically speed up training on small domains, as these non-Markovian optimal policies can be fit more closely. However, on domains with active exploration challenges such as the Pusher, exploration using time-varying policies is ineffective and degrades performance. We next investigate how the quality of the data used to train the policy affects the learned policy. We consider two variations of GCSL: one which collects data using a fixed policy ("Fixed Data Collection" in Figure 5) and another which limits the size of the dataset to be small, forcing all the data to be on-policy ("On-Policy" in Figure 5). When collecting data using a fixed policy, the learning progress of the algorithm decreases markedly, which indicates that the iterative loop of collecting data and training the policy is crucial for converging to a performant solution. By forcing the data to be all on-policy, the algorithm cannot utilize the full set of experiences seen thus far and must discard data. Although this on-policy process remains effective on simple domains, the technique leads to slower learning progress on tasks requiring more challenging control. Because GCSL can perform self-imitation from arbitrary data sources, the algorithm is amenable to initialization from prior exploration or from demonstrations. In this section, we study how GCSL performs when incorporating expert demonstrations as initializations. Our results comparing GCSL and TD3 in this setting corroborate the existing hypothesis that off-policy value function RL algorithms are challenging to integrate with initialization from demonstrations Kumar et al. (2019a). We consider the setting where an expert provides a set of demonstration trajectories, each for reaching a different goal. GCSL requires no modifications to incorporate these demonstrations: it simply adds the expert data to the initial dataset and begins the training procedure. In contrast, multiple prior works have proposed additional algorithmic changes to off-policy TD-based methods to incorporate data from expert demonstrations. We compare the performance of GCSL to one such variant of TD3-HER in incorporating expert demonstrations on the robotic pushing environment in Figure 6 (details in Appendix A.5).
Although TD3 achieves better performance with the demonstrations than when learning from scratch, it drops in performance at the beginning of training, which means TD3 regresses from the initial behavior-cloned policy, an undesirable characteristic for initializing from demonstrations. In contrast, GCSL scales favorably, learns faster than from scratch, and effectively incorporates the expert demonstrations. We believe this benefit largely comes from not needing to train an explicit critic, which can be unstable when trained on highly off-policy data such as demonstrations Kumar et al. (2019b). Figure 6 (Pusher with demonstrations, average final distance): Initializing from demonstrations: GCSL is more amenable to initialization from expert demonstrations than value-function RL methods. 6 DISCUSSION AND FUTURE WORK In this work, we proposed GCSL, a simple algorithm for learning goal-conditioned policies that uses imitation learning, while still learning autonomously from scratch. This method is exceptionally simple, relying entirely on supervised learning to learn policies by relabeling its own previously collected data. This method can easily utilize off-policy data, seamlessly incorporate expert demonstrations when they are available, and can learn directly from image observations. Although several prior works have explored similar algorithm designs in an imitation learning setting, to our knowledge our work is the first to derive a complete iterated algorithm based on this principle for learning from scratch, and the first to theoretically show that this method optimizes a lower bound on a well-defined reinforcement learning objective. While our proposed method is simple, scalable, and readily applicable, it does have a number of limitations. The current instantiation of this approach provides limited facilities for effective exploration, relying entirely on random noise during the rollouts to explore. More sophisticated exploration methods, such as exploration bonuses, are difficult to apply to our method, since there is no explicit reward function used during learning. However, a promising direction for future work would be to reweight the sampled rollouts based on novelty to effectively incorporate a novelty-seeking exploration procedure. A further direction for future work is to study whether the simplicity and scalability of our method can make it possible to perform goal-conditioned reinforcement learning on substantially larger and more varied datasets. This can in principle enable wider generalization, and realize a central goal in goal-conditioned reinforcement learning: universal policies that can succeed at a wide range of tasks in diverse environments. A EXPERIMENTAL DETAILS GCSL iteratively performs maximum likelihood estimation using a dataset of relabelled trajectories that have been previously collected by the agent. Here we present details about the policy class, data collection procedure, and other design choices. We parametrize a time-invariant policy using a neural network which takes as input the state and goal, and returns probabilities over a discretized grid of actions in the action space. For the state-based domains, the neural network is a feedforward network with two hidden layers of size 400 and 300 respectively. For the image-based domains, both the observation image and the goal image are first preprocessed through three convolutional layers, with kernel sizes 5, 3, 3 and 16, 32, 32 channels respectively.
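A sketch of the state-based policy network described above (two hidden layers of size 400 and 300, logits over a discretized action grid); the layer sizes follow the text, while everything else is illustrative.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """State-based GCSL policy: state and goal are concatenated and passed
    through hidden layers of size 400 and 300, producing logits over a
    discretized grid of actions."""
    def __init__(self, state_dim, goal_dim, num_discrete_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, 400), nn.ReLU(),
            nn.Linear(400, 300), nn.ReLU(),
            nn.Linear(300, num_discrete_actions),
        )

    def forward(self, state, goal):
        return self.net(torch.cat([state, goal], dim=-1))
```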
When executing in the environment, data is sampled according to an exploratory policy which increases the temperature of the current policy: π_explore(a|s, g) ∝ π(a|s, g)^α. The replay buffer stores trajectories and relabels on the fly, with the size of the buffer subject only to memory constraints. We perform experimental comparisons with TD3-HER. We relabel transitions as follows: ((s, g), a, (s′, g)) gets relabelled to ((s, g′), a, (s′, g′)), where the relabelled goal g′ = g with probability 0.1, g′ = s′ with probability 0.5, and g′ = s_t for some future state s_t in the trajectory with probability 0.4. As described in Section 3, the agent receives a reward of 1 and the trajectory ends if the transition is relabelled to g′ = s′, and the reward is 0 otherwise. Under this formalism, the optimal Q-function is Q*(s, a, g) = exp(−T(s, g)), where T(s, g) is the minimum expected time to go from s to g. Both the Q-function and the actor for TD3 are parametrized as neural networks, with the same architecture (except final layers) for state-based domains and image domains as those for GCSL. We also compare to TRPO, an on-policy RL algorithm. Because TRPO is on-policy, we cannot relabel goals, and so we provide a surrogate ε-ball indicator reward function: r(s, g) = 1(d(s, g) < ε), where ε is chosen appropriately for each environment. To maximize the data efficiency of TRPO, we performed a coarse hyperparameter sweep over the batch size for the algorithm. Just as with TD3, we mimic the same neural network architecture for the parametrizations of the policies as GCSL. For each environment, the goal space is identical to the state space. For the image-based experiments, images were rendered at resolution 84 × 84 × 3. 2D Room Navigation: This environment requires an agent to navigate to points in an environment with four rooms that connect to adjacent rooms. The state space has two dimensions, consisting of the Cartesian coordinates of the agent. The agent has acceleration control, and the action space has two dimensions. The distribution of goals p(g) is uniform on the state space, and the agent starts in a fixed location in the bottom-left room. Robotic Pushing: This environment requires a Sawyer manipulator to move a freely moving block in an enclosed play area with dimensions 40 cm × 20 cm. The state space is 4-dimensional, consisting of the Cartesian coordinates of the end-effector of the Sawyer agent and the Cartesian coordinates of the block. The Sawyer is controlled via end-effector position control with a three-dimensional action space. The distribution of goals p(g) is uniform on the state space (uniform block location and uniform end-effector location), and the agent starts with the block and end-effector both in the bottom-left corner of the play area. Lunar Lander: This environment requires a rocket to land in a specified region. The state space includes the normalized position of the rocket, the angle of the rocket, whether the legs of the rocket are touching the ground, and velocity information. Goals are sampled uniformly along the landing region, either touching the ground or hovering slightly above, with zero velocity. In Section 5.3, we analyzed the performance of the following variants of GCSL (Figure 7). 1. Inverse Model: This model relabels only states and goals that are one step apart. 2. On-Policy: Only the most recent 10000 transitions are stored and trained on. 3. Fixed Data Collection: Data is collected according to a uniform policy over actions. 4. Time-Varying Policy: Policies are conditioned on the remaining horizon.
Alongside the state and goal, the policy gets a reverse temperature encoding of the remaining horizon as input. We train an expert policy for robotic pushing using TRPO with a shaped dense reward function, and collect a dataset of 200 trajectories, each corresponding to a different goal. To train GCSL using these demonstrations, we simply populate the replay buffer with these trajectories at the beginning of training, and optimize the GCSL objective using these trajectories to warm-start the algorithm. Initializing a value-function method using demonstrations requires significantly more attention; we perform the following procedure. First, we perform goal-conditioned behavior cloning to learn an initial policy π_BC. Next, we collect 200 new trajectories in the environment using a uniform data collection scheme. Using this dataset of 400 trajectories, we perform policy evaluation on π_BC to learn Q^{π_BC} via bootstrapping. Having trained such an estimate of the Q-function, we initialize the policy and Q-function to these estimates, and run the appropriate value-function RL algorithm. B PROOF OF THEOREM 4.1 We will assume a discrete state space in this proof, and denote a trajectory as τ = {s_0, a_0, ..., s_T, a_T}. Let the notation G(τ) = s_T denote the final state of a trajectory, which represents the goal that the trajectory reached. As there can be multiple paths to a goal, we let τ_g = {τ : G(τ) = g} denote the set of trajectories that reach a particular goal g. We abbreviate a policy's trajectory distribution as π(τ|g) = p(s_0) ∏_{t=0}^{T} π(a_t|s_t, g) T(s_{t+1}|s_t, a_t). The target goal-reaching objective we wish to optimize, J(π), is the probability of reaching a commanded goal. The distribution over tasks (goals) is assumed to be pre-specified as p(g). GCSL optimizes the following objective, J_GCSL(π): the log-likelihood of the actions conditioned on the goals actually reached by the policy, G(τ). Here, using notation from prior work, π_old is a copy of the policy π through which gradients do not propagate. To analyze how this objective relates to J(π), we first analyze the relationship between J(π) and a surrogate objective J_surr(π). As J_surr(π) and J(π) have the same gradient for all π, they differ by some π-independent constant C_1, i.e., J(π) = J_surr(π) + C_1. We can now lower-bound the surrogate objective via the following derivation, whose final line is our goal-relabeling objective: we train the policy to reach the goals g that it actually reached. The inequality holds since log π(τ) is always negative. The inequality is loose by a term related to the probability of not reaching the commanded goal. Since the initial state and transition probabilities do not depend on the policy, we can simplify log π(τ|G(τ)) by absorbing non-π-dependent terms into C_2. Combining this with the bound on the expected return completes the proof, namely that J(π) ≥ J_GCSL(π) + C_1 + C_2. Note that in order for J(π) and J_GCSL(π) to not be degenerate, the probability of reaching a goal under π_old must be non-zero. This assumption is reasonable, and matches the assumptions on "exploratory data-collection" and full-support policies that are required by Q-learning and policy gradient convergence guarantees. We now seek to better understand the gap introduced by Equation 3 in the analysis above. We define P_{π_old}(G(τ) ≠ g) to be the probability of failure (not reaching the commanded goal) under π_old and p(g).
We overload the notation π_old(τ) = E_g[π_old(τ|g)], and additionally define p_wrong(τ) and p_right(τ), the conditional distributions of trajectories under π_old given that it did not reach, and did reach, the commanded goal, respectively. In the following section, we show that the gap can be controlled by the probability of making a mistake, P_{π_old}(G(τ) ≠ g), and by D_TV(p_wrong(τ), p_right(τ)), a measure of the difference between the distribution of trajectories that must be relabelled and those that need not be. We rewrite Equation 3 as follows, defining D to be the Radon-Nikodym derivative of p_wrong(τ) with respect to π_old(τ): E_{τ∼π_old}[log π(τ|G(τ))] − P_{π_old}(G(τ) ≠ g) E_{τ∼π_old}[D log π(τ|G(τ))] = (1 − P_{π_old}(G(τ) ≠ g)) E_{τ∼π_old}[log π(τ|G(τ))] + P_{π_old}(G(τ) ≠ g) E_{τ∼π_old}[(1 − D) log π(τ|G(τ))], where the second term is the relevant gap. The first term is affine with respect to the GCSL loss, so the second term is the error we seek to understand. The inequality is maintained because of the nonpositivity of log π(τ), and the final step holds because π_old(τ) is a mixture of p_wrong(τ) and p_right(τ). This derivation shows that the gap between J_surr and J_GCSL (up to affine considerations) can be controlled by 1) the probability of reaching the wrong goal and 2) the divergence between the conditional distribution of good trajectories and those which must be relabelled. As either term goes to 0, the bound becomes tight. We now show that sufficiently optimizing the GCSL objective causes the probability of reaching the wrong goal to be bounded close to 0, and thus bounds the gap close to 0. Suppose we collect trajectories from a policy π_data. Mimicking the notation from before, we define π_data(τ) = E_{g∼p(g)}[π_data(τ|g)]. For convenience, we define π*_data(a_t|s_t, g) ∝ Σ_{τ\a_t} π_data(τ) 1(G(τ) = g) 1(s_t(τ) = s_t) to be the conditional distribution of actions for a given state given that the goal g is reached at the end of the trajectory. If this conditional distribution is not defined, we let π*_data(a_t|s_t, g) be uniform, so that π*_data(a_t|s_t, g) is well-defined for all states, goals, and timesteps. Lemma B.1. Consider an environment with deterministic dynamics in which all goals are reachable in T timesteps, and a behavior policy π_data which is exploratory: π_data(a_t|s_t, g) > 0 for all (a, s, g) (for example, epsilon-greedy exploration). Suppose the GCSL objective is sufficiently optimized, so that for all s, g ∈ S and time-steps t ∈ {1, ..., T}, D_TV(π(a_t|s_t, g), π*_data(a_t|s_t, g)) ≤ ε. Then, the probability of making a mistake, P_π(G(τ) ≠ g), can be bounded above by εT. Proof. We show the result through a coupling argument, similar to prior coupling-based analyses. Because D_TV(π(a_t|s_t, g), π*_data(a_t|s_t, g)) ≤ ε, we can define a (1 − ε)-coupled policy pair (π, π*_data), which takes differing actions with probability at most ε. By a union bound over all timesteps, the probability that π and π*_data take any different actions throughout the trajectory is bounded by εT, and under the assumption of deterministic dynamics, they take the same trajectory with probability at least 1 − εT. Under the assumption of deterministic dynamics and all goals being reachable from the initial state distribution in T timesteps, the policy π*_data(a_t|s_t, g) satisfies P_{π*_data}(G(τ) ≠ g) = 0. Because π*_data reaches the goal with probability 1, this implies that π must also reach the goal with probability at least 1 − εT. Thus, P_π(G(τ) ≠ g) ≤ εT. Figure 8 below shows parts of the state along trajectories produced by GCSL.
In Lunar Lander, this state is captured by the rocket's position, and in 2D Room Navigation it is the agent's position. While these trajectories do not always take the shortest path to the goal, they do often take fairly direct paths to the goal from the initial position, avoiding very roundabout trajectories. | Learning how to reach goals from scratch by using imitation learning with data relabeling | 643 | scitldr |
Neural networks have recently shown excellent performance on numerous classification tasks. These networks often have a large number of parameters and thus require much data to train. When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, resulting in a large model variance and poor generalization performance. To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure, separately, and then the next two complementary datasets for further training the networks are generated through interaction (or information sharing) between the two base networks trained previously. This interactive training process continues iteratively until a stop criterion is met. In the testing phase, the outputs of the two networks are combined to obtain one final score for classification. Detailed analysis of the method is provided for an in-depth understanding of its mechanism. Image classification is an important application of machine learning and data mining. Recent years have witnessed tremendous improvement in large-scale image classification due to the advances of deep learning BID15 BID17 BID7 BID4. Despite recent breakthroughs in applying deep networks, one persistent challenge is classification with a small number of training data points BID12. Small-sample classification is important, not only because humans can learn a concept of a class without millions or billions of data points, but also because many kinds of real-world data are available only in small quantities. Given a small number of training data points, a large network will inevitably encounter the overfitting problem, even when dropout BID16 and weight decay are applied during training BID19. This is mainly because a large network represents a large function space, in which many functions can fit a given small-sample dataset, making it difficult to find the underlying true function that is able to generalize well. As a result, a neural network trained with a small number of data points usually exhibits a large variance. Ensemble learning is one way to reduce the variance. According to the bias-variance dilemma BID2, there is a trade-off between the bias and variance contributions to estimation or classification errors. The variance is reduced when multiple models or ensemble members are trained with different datasets and are combined for decision making, and the effect is more pronounced if the ensemble members are accurate and diverse BID3. There exist two classic strategies of ensemble learning BID21 BID13. The first one is Bagging BID20 and variants thereof. This strategy trains independent classifiers on bootstrap re-samples of the training data and then combines the classifiers based on some rule, e.g., a weighted average. Bagging methods attempt to obtain diversity by bootstrap sampling, i.e., random sampling with replacement. There is no guarantee of finding complementary ensemble members, and the new datasets constructed by bootstrap sampling will contain even fewer distinct data points, which can potentially make the overfitting problem even more severe. The second strategy is Boosting BID14 BID10 and its variants.
Taking Adaboost BID20 as an example, a classifier in Adaboost is trained according to the training error rates of previous classifiers. Adaboost works well for weak base classifiers. If the base classifier is of high complexity, such as a large neural network, the first base learner will overfit the training data. Consequently, either the Adaboost procedure is stopped or the second classifier has to be trained on data with original weights, i.e. to start from the scratch again, which in no way is able to ensure the diversity of base networks. In addition, there also exist some "implicit" ensemble methods in the area of neural networks. Dropout BID16, DropConnect BID18 and Stochastic Depth techniques BID5 create an ensemble by dropping some hidden nodes, connections (weights) and layers, respectively. Snapshot Ensembling BID6 ) is a method that is able to, by training only one time and finding multiple local minima of objective function, get many ensemble members, and then combines these members to get a final decision. Temporal ensembling, a parallel work to Snapshot Ensembling, trains on a single network, but the predictions made on different epochs correspond to an ensemble prediction of multiple sub-networks because of dropout regularization BID8. These works have demonstrated advantages of using an ensemble technique. In these existing "implicit" ensemble methods, however, achieving diversity is left to randomness, making them ineffective for small-sample classification. Therefore, there is a need for new ensemble learning methods able to train diverse and complementary neural networks for small-sample classification. In this paper, we propose a new ensemble method called InterBoost for training two base neural networks with the same structure. In the method, the original dataset is first re-weighted by two sets of complementary weights. Secondly, the two base neural networks are trained on the two re-weighted datasets, separately. Then we update training data weights according to prediction scores of the two base networks on training data, so there is an interaction between the two base networks during the training process. When base networks are trained interactively with the purpose of deliberately pushing each other in opposite directions, they will be complementary. This process of training network and updating weights is repeated until a stop criterion is met. In this paper, we present the training and test procedure of the proposed ensemble method and evaluate it on the UIUC-Sports dataset BID9 ) and the LabelMe dataset BID11 with a comparison to Bagging, Adaboost, SnapShot Ensembling and other existing methods. In this section, we present the proposed method in detail, followed by discussion. We are given a training dataset {x d, y d}, d∈ {1, 2, ..., D}, where y d is the true class label of x d. We assign a weight to the point {x d, y d}, which is used for re-weighting the loss of the data point in the loss function of neural network. It is equivalent to changing the distribution of training dataset and thus changing the optimization objective of neural network. We randomly assign a weight 0 < W 1d < 1 to {x d, y d} for training the first base network, and then assign a complementary weight W 2d = 1 − W 1d to {x d, y d} for training the second base network. The core idea of the InterBoost method is to train two base neural networks interactively. 
This is in contrast to Boosting, where base networks are typically trained in sequence, namely the subsequent network or learner is trained on a dataset with new data weights that are updated using the error rate performance of the previous base network. As shown in Figure 1, the procedure contains multiple iterations. It first trains a number of epochs for two base networks using two complementary datasets DISPLAYFORM0, 2,..., D}, separately, and then iteratively update data weights based on the probabilities that the two base networks classify DISPLAYFORM1 2 ), where θ 1 and θ 2 are parameters of the two base networks. During the iterative process, the weights always have the DISPLAYFORM2 Figure 1: InterBoost training procedure. n is the number of iteration. W 1d and W 2d are the weights of data point {x d, y d}, d∈ {1, 2, ..., D} for two base networks. θ 1 and θ 2 are the parameters of two base neural networks. DISPLAYFORM3, 2} is the probability that the ith base network can classify x d correctly after nth iteration.constraints W 1d + W 2d = 1 and 0 < W 1d, W 2d < 1. That is, they are always kept complementary to ensure the trained networks are complementary. Training networks and updating data weights run alternately until a stop condition is met. To compute θ DISPLAYFORM4 in the nth iteration, we minimize weighted loss functions are follows. DISPLAYFORM5 DISPLAYFORM6 where DISPLAYFORM7 are loss functions of of x d for two base networks, respectively. To update W 1d and W 2d, we devise the following updating rule: If the prediction probability of a data point in one base network is higher than that in another, its weight in next iteration for training this network will be smaller than its weight for training another base network. In this way, a base network will be assigned a larger weight for a data point on which it does not perform well. Hence the interaction make it be trained on diverse datasets in sequence, which can be considered as "implicit" Adaboost. Moreover, considering the fact that the two networks are always trained based on loss functions with different data weights, this interaction makes them diverse and complementary. Figure 2: Function w 1 = p 2 /(p 1 + p 2) (left) and function w 1 = ln(p 1)/(ln(p 1) + ln(p 2)) (right), where 0 < p 1, p 2 < 1.To implement the rule of updating data weights, a simple method is to use function w 1 = p 2 /(p 1 + p 2), and then assign W 1d = w 1 and W 2d = 1 − W 1d. Here, for convenience, we use p 1 and p 2 to represent the probabilities that the point x d is classified by the two base networks correctly. Moreover, this is problematic, as illustrated on the left side of Figure 2. For example, when both p 1 and p 2 are large and close to each other, w 1 will be close to 0.5. In this situation, there will be no big difference between W 1d and W 2d. In addition, this situation will occur frequently as neural networks with high flexibility will fit the data well. As a , the function have difficulty to make a data point have different weights in two base networks. Instead, we use function w 1 = ln(p 1)/(ln(p 1) + ln(p 2)), as shown on the right side of Figure 2, to update data weights. It is observed that the function is more sensitive to the small differences between p 1 and p 2 when they are both large. Specifically, for {x d, y d}, d ∈ {1, 2, ..., D}, we update its weights W1d and W2d by Equation and. DISPLAYFORM8 DISPLAYFORM9 The training procedure of InterBoost is described in Algorithm 1. 
First, two base networks are trained by minimizing loss functions L 1 and L 2, respectively. Secondly, weights of data point on training data are recalculated using Equation and on the basis of the prediction from two base networks. We repeat the two steps until the proposed ensemble network achieves a predefined performance on the validation dataset or the maximum iteration number is reached. Input: DISPLAYFORM0 and maximum number of iterations N. Initialize weights for each data point, W1d, W2d, and parameters of two base neural networks θ, d ∈ {1, 2, ..., D}, according to and Computing accuracy on V, temp acc, by if temp acc ≥ val acc then val acc ← temp acc DISPLAYFORM0 end if until val acc == 1 or n == N return Parameters of two base neural networks, θ 1 and θ 2 Through the interactive and iterative training process, the two networks are expected to be well trained over various regions of the problem space, represented by the data. In other words, they become "experts" with different knowledge. Therefore, we adopt a simple fusion strategy of linearly combining the prediction of two networks with equal weights and choose the index class with a maximum prediction value as the final label, as detailed in. DISPLAYFORM0 where P (c | x new, θ i), i ∈ {1, 2} is the cth class probability of the unseen data point x new from the ith network, and O(x new) is the final classification label of the point x new. Because base networks or "experts" have different knowledge, in the event that one base network makes a wrong decision, it is quite possible that another network will correct it. Complementary datasets DISPLAYFORM0 2d } Figure 3: Generated training datasets during the InterBoost training process. The datasets on the left side are for the first base network, and the datasets on the right side are for the second base network. During the training process, we always keep the constraints W 1d +W 2d = 1 and 0 < W 1d, W 2d < 1, to ensure the base networks diverse and complementary. Equation FORMULA10 and FORMULA11 are designed for updating weights of data points, so that the weight updating rule is sensitive to small differences between prediction probabilities from two base networks to prevent premature training. Furthermore, if the prediction of a data point in one network is more accurate than another network, its weight in next round will be smaller than its weight for another network, thus making the training of individual network on more different regions. The training process generates many diverse training dataset pairs, as shown in Figure 3. That is, each base network will be trained on these diverse datasets in sequence, which is equivalent to that an "implicit" ensemble is applied on each base network. Therefore, the base network will get more and more accurate during training process. At the same time, the two networks are complementary to each other. In each iteration, determination of the number of epochs for training base networks is also crucial. If the number is too large, the two base networks will fit training data too well, making it difficult to change data weights of to generate diverse datasets. If it is too small, it is difficult to obtain accurate base classifiers. In experiments, we find that a suitable epoch number in each iteration is the ones that make the classification accuracy of the base network fall in the interval of (0.9, 0.98).Similar to Bagging and Adaboost, our method has no limitation on the type of neural networks. 
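To make the interactive re-weighting concrete, the sketch below is our own illustration: the two base networks are replaced by stand-in functions, and only the complementary-weight bookkeeping with the update w1 = ln(p1)/(ln(p1)+ln(p2)) discussed above is shown; in the real procedure, step 1 retrains the two weighted networks for a number of epochs before the weights are updated.

# Minimal sketch of the interactive weight update at the heart of InterBoost.
import numpy as np

rng = np.random.default_rng(0)
D = 8                                   # number of training points
W1 = rng.uniform(0.05, 0.95, size=D)    # random initial weights for network 1
W2 = 1.0 - W1                           # complementary weights for network 2

def predict_proba_correct(theta, data_weights):
    """Stand-in for a trained base network's probability of the true class."""
    return rng.uniform(0.5, 1.0, size=D)

for iteration in range(3):
    # 1) (Re)train each base network on the data re-weighted by W1 / W2.
    theta1, theta2 = "trained-on-W1", "trained-on-W2"   # placeholders

    # 2) Evaluate p1, p2: probability each network assigns to the true label.
    p1 = predict_proba_correct(theta1, W1)
    p2 = predict_proba_correct(theta2, W2)

    # 3) Interactive update: a point that a network already handles well gets a
    #    SMALLER weight for that network next round; the log form stays sensitive
    #    when both p1 and p2 are close to 1.  W1 + W2 = 1 holds by construction.
    W1 = np.log(p1) / (np.log(p1) + np.log(p2))
    W2 = 1.0 - W1

print(W1[:4], W2[:4])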
In addition, it is straightforward to extend the proposed ensemble method to multiple networks, just by keeping the weights complementary across all base networks, i.e., Σ_{i=1}^{H} W_id = 1 for d ∈ {1, 2, ..., D}, in which H is the number of base networks and 0 < W_id < 1. Considering our focus on small-sample image classification, we choose the following datasets. • LabelMe dataset (LM): a subset of the scene classification dataset from BID11. The dataset contains 8 classes of natural scene images: coast, forest, highway, inside city, mountain, open country, street and tall building. We randomly select 210 images for each class, so the total number of images is 1680. • UIUC-Sports dataset (UIUC): an 8-class sports event classification dataset from BID9. The dataset contains 8 classes of sports scene images. The total number of images is 1579. The classes are: bocce, polo, rowing, sailing, snowboarding, rock climbing, croquet and badminton, and the number of images differs across classes. For the LM dataset, we split the whole dataset into three parts: training, validation and test datasets. Both the training and test datasets contain 800 data points, in which each class contains 100 data points. The validation dataset contains 8 classes, and each class contains 10 data points. For the UIUC dataset, we also split the whole dataset into three parts as above. In this dataset, however, the number of data points in each class is different. We first randomly choose 10 data points for every class to form the validation dataset, resulting in 80 data points in total. The remaining parts of the dataset are divided equally into training and test datasets. For small-sample classification, good discriminative features are crucial. For both the LM and UIUC datasets, we first resize the images into the same size of 256 × 256 and then directly use the VGG16 BID15 network trained on the ImageNet dataset, without any additional tuning, to extract image features. Finally, we only keep the features of the last convolutional layer and simply flatten them. Hence, the final feature dimension for each image is 512 × 8 × 8 = 32768. Considering the small number of data points in the two datasets, we only use a fully connected network with two layers. In the first layer, the activation function is the Rectified Linear Unit (ReLU). In the second layer, the activation function is Softmax. We tried different numbers of hidden units, from 1024 to 32, and found that overfitting is more severe if the number of hidden units is larger. Finally, we set the number of hidden layer units to 32. We did not adopt the dropout technique, simply because we found there was no difference between with and without dropout, and we set the L2 regularization parameter on the network weights to 0.01. We used minibatch gradient descent to minimize the softmax cross-entropy loss. The optimization algorithm is RMSprop, the initial learning rate is 0.001, and the batch size is 32. In order to evaluate the classification performance of the proposed InterBoost method on the LM and UIUC datasets, we use the training, validation and test datasets described above, and compare it with SVM with a polynomial kernel (SVM), a Softmax classifier BID0, a fully connected network (FC), Bagging, Adaboost and SnapShot Ensembling (SnapShot) BID6. For SnapShot, we adopt the code published by its authors. For Softmax, FC, Bagging, Adaboost and our method, we implement them based on the Keras framework BID1, in which FC is the base network of Bagging, Adaboost, SnapShot and the proposed method.
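For concreteness, a sketch of the base fully connected network configuration described above is given below (tf.keras assumed; this is an illustration of ours, not the authors' released code, and where the text leaves details open, such as which layers carry the L2 penalty, the choices here are assumptions).

# Base FC network: 32 ReLU hidden units, softmax over 8 classes, L2 = 0.01, RMSprop.
from tensorflow import keras

def build_base_network(input_dim=32768, n_classes=8, l2=0.01, lr=0.001):
    model = keras.Sequential([
        keras.Input(shape=(input_dim,)),
        keras.layers.Dense(32, activation="relu",
                           kernel_regularizer=keras.regularizers.l2(l2)),
        keras.layers.Dense(n_classes, activation="softmax",
                           kernel_regularizer=keras.regularizers.l2(l2)),
    ])
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# The per-point weights W1 / W2 can be passed to fit() via the sample_weight
# argument, which is one way to realize the re-weighted loss of each base network.
model = build_base_network()
model.summary()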
The code of the proposed method, the code of the other referenced methods, and the features of the two datasets used in the experiment can be found on an anonymous webpage 2 on DropBox. On the two datasets, we test different epoch numbers ranging from 50 to 800 for Adaboost. It is found that the performance is similar when the epoch number for training the base networks is set to 800 and to 500. Hence, the epoch numbers for FC, Softmax and the base network of Bagging are also set to 800. For SnapShot, we take a snapshot network every 800 epochs, and the number of snapshots is 2. For the base networks of our method, we choose 8 iterations, each iteration has 100 epochs, and thus the total epoch number remains the same as that of Bagging, Adaboost and SnapShot. SnapShot does not use a validation dataset, so we merge the training and validation datasets into a new training dataset while the test dataset remains unchanged. We run these methods on the two datasets for 60 rounds each. The average accuracies are reported in TAB0. Meanwhile, to further evaluate the robustness and stability of the proposed method and the other referenced methods, we show box plots of the accuracies obtained by FC, Bagging, Adaboost, SnapShot and our method on the two datasets in FIG2. From TAB0, it can be seen that our method obtains an accuracy of 89.0% on the UIUC dataset and an accuracy of 86.4% on the LM dataset. On the UIUC dataset, our method performs better than Bagging and Softmax, similarly to FC, and worse than Adaboost, SnapShot and SVM. On the LM dataset, our method performs better than FC, Bagging and Softmax, but worse than Adaboost, SnapShot and SVM. As shown in FIG2, our method does not have overall superior performance to the other baseline methods on the LM and UIUC datasets. In the paper, we have proposed an ensemble method called InterBoost for training neural networks for small-sample classification and detailed the training and test procedures. In the training procedure, the two base networks share information with each other in order to push each other to be optimized in different directions. At the same time, each base network is trained on diverse datasets iteratively. Experimental results on the UIUC-Sports (UIUC) and LabelMe (LM) datasets showed that our ensemble method does not outperform other ensemble methods. Future work includes improving the proposed method, increasing the number of networks, and experimenting on different types of networks as well as different kinds of data to evaluate the effectiveness of the InterBoost method. | In the paper, we proposed an ensemble method called InterBoost for training neural networks for small-sample classification. The method has better generalization performance than other ensemble methods, and reduces variances significantly. | 644 | scitldr |
Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can achieve significantly better or similar interaction detection performance compared to the state-of-the-art without searching an exponential solution space of possible interactions. We obtain this accuracy and efficiency by observing that interactions between input features are created by the non-additive effect of nonlinear activation functions, and that interacting paths are encoded in weight matrices. We demonstrate the performance of our method and the importance of discovered interactions via experimental on both synthetic datasets and real-world application datasets. Despite their strong predictive power, neural networks have traditionally been treated as "black box" models, preventing their adoption in many application domains. It has been noted that complex machine learning models can learn unintended patterns from data, raising significant risks to stakeholders BID43. Therefore, in applications where machine learning models are intended for making critical decisions, such as healthcare or finance, it is paramount to understand how they make predictions BID6 BID17.Existing approaches to interpreting neural networks can be summarized into two types. One type is direct interpretation, which focuses on 1) explaining individual feature importance, for example by computing input gradients BID37 BID34 BID40 or by decomposing predictions BID2 BID36, 2) developing attention-based models, which illustrate where neural networks focus during inference BID20 BID27 BID47, and 3) providing model-specific visualizations, such as feature map and gate activation visualizations BID48 BID21. The other type is indirect interpretation, for example post-hoc interpretations of feature importance BID32 and knowledge distillation to simpler interpretable models BID7.It has been commonly believed that one major advantage of neural networks is their capability of modeling complex statistical interactions between features for automatic feature learning. Statistical interactions capture important information on where features often have joint effects with other features on predicting an outcome. The discovery of interactions is especially useful for scientific discoveries and hypothesis validation. For example, physicists may be interested in understanding what joint factors provide evidence for new elementary particles; doctors may want to know what interactions are accounted for in risk prediction models, to compare against known interactions from existing medical literature. In this paper, we propose an accurate and efficient framework, called Neural Interaction Detection (NID), which detects statistical interactions of any order or form captured by a feedforward neural network, by examining its weight matrices. Our approach is efficient because it avoids searching over an exponential solution space of interaction candidates by making an approximation of hidden unit importance at the first hidden layer via all weights above and doing a 2D traversal of the input weight matrix. We provide theoretical justifications on why interactions between features are created at hidden units and why our hidden unit importance approximation satisfies bounds on hidden unit gradients. 
Top-K true interactions are determined from interaction rankings by using a special form of generalized additive model, which accounts for interactions of variable order BID46 BID25. Experimental on simulated datasets and real-world datasets demonstrate the effectiveness of NID compared to the state-of-the-art methods in detecting statistical interactions. The rest of the paper is organized as follows: we first review related work and define notations in Section 2. In Section 3, we examine and quantify the interactions encoded in a neural network, which leads to our framework for interaction detection detailed in Section 4. Finally, we study our framework empirically and demonstrate its practical utility on real-world datasets in Section 5. Statistical interaction detection has been a well-studied topic in statistics, dating back to the 1920s when two-way ANOVA was first introduced BID11. Since then, two general approaches emerged for conducting interaction detection. One approach has been to conduct individual tests for each combination of features BID25. The other approach has been to pre-specify all interaction forms of interest, then use lasso to simultaneously select which are important BID42 BID4 BID30.Notable approaches such as ANOVA and Additive Groves BID39 belong to the first group. Two-way ANOVA has been a standard method of performing pairwise interaction detection that involves conducting hypothesis tests for each interaction candidate by checking each hypothesis with F-statistics BID45. Besides two-way ANOVA, there is also threeway ANOVA that performs the same analyses but with interactions between three variables instead of two; however, four-way ANOVA and beyond are rarely done because of how computationally expensive such tests become. Specifically, the number of interactions to test grows exponentially with interaction order. Additive Groves is another method that conducts individual tests for interactions and hence must face the same computational difficulties; however, it is special because the interactions it detects are not constrained to any functional form e.g. multiplicative interactions. The unconstrained manner by which interactions are detected is advantageous when the interactions are present in highly nonlinear data BID38. Additive Groves accomplishes this by comparing two regression trees, one that fits all interactions, and the other that has the interaction of interest forcibly removed. In interaction detection, lasso-based methods are popular in large part due to how quick they are at selecting interactions. One can construct an additive model with many different interaction terms and let lasso shrink the coefficients of unimportant terms to zero BID42. While lasso methods are fast, they require specifying all interaction terms of interest. For pairwise interaction detection, this requires O(p 2) terms (where p is the number of features), and O(2 p) terms for higherorder interaction detection. Still, the form of interactions that lasso-based methods capture is limited by which are pre-specified. Our approach to interaction detection is unlike others in that it is both fast and capable of detecting interactions of variable order without limiting their functional forms. The approach is fast because it does not conduct individual tests for each interaction to accomplish higher-order interaction detection. This property has the added benefit of avoiding a high false positive-, or false discovery rate, that commonly arises from multiple testing BID3. 
The interpretability of neural networks has largely been a mystery since their inception; however, many approaches have been developed in recent years to interpret neural networks in their traditional feedforward form and as deep architectures. Feedforward neural networks have undergone multiple advances in recent years, with theoretical works justifying the benefits of neural network depth (; BID23 BID33 and new research on interpreting feature importance from input gradients BID18 BID34 BID40 . Deep architectures have seen some of the greatest breakthroughs, with the widespread use of attention mechanisms in both convolutional and recurrent architectures to show where they focus on for their inferences BID20 BID27 BID47 . Methods such as feature map visualization BID48, de-convolution , saliency maps BID37, and many others have been especially important to the vision community for understanding how convolutional networks represent images. With long short-term memory networks (LSTMs), a research direction has studied multiplicative interactions in the unique gating equations of LSTMs to extract relationships between variables across a sequence BID1 BID28. DISPLAYFORM0 Figure 1: An illustration of an interaction within a fully connected feedforward neural network, where the box contains later layers in the network. The first hidden unit takes inputs from x 1 and x 3 with large weights and creates an interaction between them. The strength of the interaction is determined by both incoming weights and the outgoing paths between a hidden unit and the final output, y. Unlike previous works in interpretability, our approach extracts generalized non-additive interactions between variables from the weights of a neural network. Vectors are represented by boldface lowercase letters, such as x, w; matrices are represented by boldface capital letters, such as W. The i-th entry of a vector w is denoted by w i, and element (i, j) of a matrix W is denoted by W i,j. The i-th row and j-th column of W are denoted by W i,: and W:,j, respectively. For a vector w ∈ R n, let diag(w) be a diagonal matrix of size n × n, where {diag(w)} i,i = w i. For a matrix W, let |W| be a matrix of the same size where DISPLAYFORM0 Let [p] denote the set of integers from 1 to p. An interaction, I, is a subset of all input features [p] with |I| ≥ 2. For a vector w ∈ R p and I ⊆ [p], let w I ∈ R |I| be the vector restricted to the dimensions specified by I. 1 Consider a feedforward neural network with L hidden layers. Let p be the number of hidden units in the -th layer. We treat the input features as the 0-th layer and p 0 = p is the number of input features. There are L weight matrices DISPLAYFORM0 and L bias vectors b ∈ R p, = 1, 2,..., L. Let φ (·) be the activation function (nonlinearity), and let w y ∈ R p L and b y ∈ R be the coefficients and bias for the final output. Then, the hidden units h of the neural network and the output y with input x ∈ R p can be expressed as: DISPLAYFORM1 We can construct a directed acyclic graph G = (V, E) based on non-zero weights, where we create vertices for input features and hidden units in the neural network and edges based on the non-zero entries in the weight matrices. See Appendix A for a formal definition. 
A statistical interaction describes a situation in which the joint influence of multiple variables on an output variable is not additive BID8 BID39 DISPLAYFORM0 For example, in x 1 x 2 +sin (x 2 + x 3 + x 4), there is a pairwise interaction {1, 2} and a 3-way higherorder interaction {2, 3, 4}, where higher-order denotes |I| ≥ 3. Note that from the definition of statistical interaction, a d-way interaction can only exist if all its corresponding (d − 1)-interactions exist BID39. For example, the interaction {1, 2, 3} can only exist if interactions {1, 2}, {1, 3}, and {2, 3} also exist. We will often use this property in this paper. In feedforward neural networks, statistical interactions between features, or feature interactions for brevity, are created at hidden units with nonlinear activation functions, and the influences of the interactions are propagated layer-by-layer to the final output (see Figure 1). In this section, we propose a framework to identify and quantify interactions at a hidden unit for efficient interaction detection, then the interactions are combined across hidden units in Section 4. In feedforward neural networks, any interacting features must follow strongly weighted connections to a common hidden unit before the final output. That is, in the corresponding directed graph, interacting features will share at least one common descendant. The key observation is that nonoverlapping paths in the network are aggregated via weighted summation at the final output without creating any interactions between features. The statement is rigorized in the following proposition and a proof is provided in Appendix A. The reverse of this statement, that a common descendant will create an interaction among input features, holds true in most cases. Proposition 2 (Interactions at Common Hidden Units). Consider a feedforward neural network with input feature DISPLAYFORM0, there exists a vertex v I in the associated directed graph such that I is a subset of the ancestors of v I at the input layer (i.e., = 0).In general, the weights in a neural network are nonzero, in which case Proposition 2 blindly infers that all features are interacting. For example, in a neural network with just a single hidden layer, any hidden unit in the network can imply up to 2 Wj,: 0 potential interactions, where W j,: 0 is the number of nonzero values in the weight vector W j,: for the j-th hidden unit. Managing the large solution space of interactions based on nonzero weights requires us to characterize the relative importance of interactions, so we must mathematically define the concept of interaction strength. In addition, we limit the search complexity of our task by only quantifying interactions created at the first hidden layer, which is important for fast interaction detection and sufficient for high detection performance based on empirical evaluation (see evaluation in Section 5.2 and TAB3).Consider a hidden unit in the first layer: φ w x + b, where w is the associated weight vector and x is the input vector. While having the weight w i of each feature i, the correct way of summarizing feature weights for defining interaction strength is not trivial. For an interaction I ⊆ [p], we propose to use an average of the relevant feature weights w I as the surrogate for the interaction strength: µ (|w I |), where µ (·) is the averaging function for an interaction that represents the interaction strength due to feature weights. 
We provide guidance on how µ should be defined by first considering representative averaging functions from the generalized mean family: maximum value, root mean square, arithmetic mean, geometric mean, harmonic mean, and minimum value BID5. These options can be narrowed down by accounting for intuitive properties of interaction strength: 1) interaction strength is evaluated as zero whenever an interaction does not exist (one of the features has zero weight); 2) interaction strength does not decrease with any increase in magnitude of feature weights; 3) interaction strength is less sensitive to changes in large feature weights. While the first two properties place natural constraints on interaction strength behavior, the third property is subtle in its intuition. Consider the scaling between the magnitudes of multiple feature weights, where one weight has much higher magnitude than the others. In the worst case, there is one large weight in magnitude while the rest are near zero. If the large weight grows in magnitude, then interaction strength may not change significantly, but if instead the smaller weights grow at the same rate, then interaction strength should strictly increase. As a , maximum value, root mean square, and arithmetic mean should be ruled out because they do not satisfy either property 1 or 3. Our definition of interaction strength at individual hidden units is not complete without considering their outgoing paths, because an outgoing path of zero weight cannot contribute an interaction to the final output. To propose a way of quantifying the influence of an outgoing path on the final output, we draw inspiration from Garson's algorithm BID14 BID15, which instead of computing the influence of a hidden unit, computes the influence of features on the output. This is achieved by cumulative matrix multiplications of the absolute values of weight matrices. In the following, we propose our definition of hidden unit influence, then prove that this definition upper bounds the gradient magnitude of the hidden unit with its activation function. To represent the influence of a hidden unit i at the -th hidden layer, we define the aggregated weight z DISPLAYFORM0 where z ∈ R p. This definition upper bounds the gradient magnitudes of hidden units because it computes Lipschitz constants for the corresponding units. Gradients have been commonly used as variable importance measures in neural networks, especially input gradients which compute directions normal to decision boundaries BID34 BID16 BID37. Thus, an upper bound on the gradient magnitude approximates how important the variable can be. A full proof is shown in Appendix C. Lemma 3 (Neural Network Lipschitz Estimation). Let the activation function φ (·) be a 1-Lipschitz function. Then the output y is z DISPLAYFORM1 We now combine our definitions from Sections 3.1 and 3.2 to obtain the interaction strength ω i (I) of a potential interaction I at the i-th unit in the first hidden layer h DISPLAYFORM0.(Note that ω i (I) is defined on a single hidden unit, and it is agnostic to scaling ambiguity within a ReLU based neural network. In Section 4, we discuss our scheme of aggregating strengths across hidden units, so we can compare interactions of different orders. In this section, we propose our feature interaction detection algorithm NID, which can extract interactions of all orders without individually testing for each of them. 
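As a quick, illustrative sketch of the quantities just defined (the aggregated weights z and the per-unit interaction strength of Equation 1), the following numpy snippet uses random weights and our own variable names; the layer widths match the experimental configuration reported later, and the full detection procedure built on these quantities is described next.

# Aggregated weight of first-layer units and per-unit interaction strength.
import numpy as np

rng = np.random.default_rng(0)
p = 10                                   # input features
sizes = [p, 140, 100, 60, 20]            # layer widths used in the experiments
Ws = [rng.normal(size=(sizes[l + 1], sizes[l])) for l in range(4)]   # W^1..W^4
w_y = rng.normal(size=sizes[-1])         # final output weights

# z^(1) = |w_y|^T |W^4| |W^3| |W^2|: influence of each first-layer hidden unit.
z = np.abs(w_y)
for W in reversed(Ws[1:]):
    z = z @ np.abs(W)                    # ends with shape (140,), one value per unit

def omega(i, I):
    """Strength of interaction I (a tuple of feature indices) at hidden unit i,
    using mu(.) = min over the relevant incoming weight magnitudes."""
    return z[i] * np.min(np.abs(Ws[0][i, list(I)]))

print(omega(0, (1, 3)), omega(0, (1, 3, 7)))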
Our methodology for interaction detection is comprised of three main steps: 1) train a feedforward network with regularization, 2) interpret learned weights to obtain a ranking of interaction candidates, and 3) determine a cutoff for the top-K interactions. Data often contains both statistical interactions and main effects BID44. Main effects describe the univariate influences of variables on an outcome variable. We study two architectures: MLP and MLP-M. MLP is a standard multilayer perceptron, and MLP-M is an MLP with additional univariate networks summed at the output (Figure 2). The univariate networks are intended to discourage the modeling of main effects away from the standard MLP, which can create spurious interactions using the main effects. When training the neural networks, we apply sparsity regularization on the MLP portions of the architectures to 1) suppress unimportant interacting paths and 2) push the modeling of main effects towards any univariate networks. We note that this approach can also generalize beyond sparsity regularization (Appendix G). Input: input-to-first hidden layer weights W, aggregated weights z DISPLAYFORM0 Output: ranked list of interaction candidates DISPLAYFORM1 1: d ← initialize an empty dictionary mapping interaction candidate to interaction strength 2: for each row w of W indexed by r do 3:for j = 2 to p do I ← sorted indices of top j weights in w 5: DISPLAYFORM0 DISPLAYFORM1Figure 2: Standard feedforward neural network for interaction detection, with optional univariate networksWe design a greedy algorithm that generates a ranking of interaction candidates by only considering, at each hidden unit, the top-ranked interactions of every order, where 2 ≤ |I| ≤ p, thereby drastically reducing the search space of potential interactions while still considering all orders. The greedy algorithm (Algorithm 1) traverses the learned input weight matrix W across hidden units and selects only top-ranked interaction candidates per hidden unit based on their interaction strengths (Equation 1). By selecting the top-ranked interactions of every order and summing their respective strengths across hidden units, we obtain final interaction strengths, allowing variable-order interaction candidates to be ranked relative to each other. For this algorithm, we set the averaging function µ (·) = min (·) based on its performance in experimental evaluation (Section 5.1).With the averaging function set to min (·), Algorithm 1's greedy strategy automatically improves the ranking of a higher-order interaction over its redundant subsets 2 (for redundancy, see Definition 1). This allows the higher-order interaction to have a better chance of ranking above any false positives and being captured in the cutoff stage. We justify this improvement by proving Theorem 4 under a mild assumption. Theorem 4 (Improving the ranking of higher-order interactions). Let R be the set of interactions proposed by Algorithm 1 with µ (·) = min (·), let I ∈ R be a d-way interaction where d ≥ 3, and let S be the set of subset (d − 1)-way interactions of I where |S| = d. Assume that for any hidden unit j which proposed s ∈ S ∩ R, I will also be proposed at the same hidden unit, and DISPLAYFORM2. Then, one of the following must be true: a) ∃s ∈ S ∩ R ranked lower than I, i.e., ω(I) > ω(s), or b) ∃s ∈ S where s / ∈ R.The full proof is included in Appendix D. 
Under the noted assumption, the theorem in part a) shows that a d-way interaction will improve over one its d − 1 subsets in rankings as long as there is no sudden drop from the weight of the (d − 1)-way to the d-way interaction at the same hidden units. We note that the improvement extends to b) as well, when d = |S ∩ R| > 1.Lastly, we note that Algorithm 1 assumes there are at least as many first-layer hidden units as there are the true number of non-redundant interactions. In practice, we use an arbitrarily large number of first-layer hidden units because true interactions are initially unknown. In order to predict the true top-K interactions DISPLAYFORM0, we must find a cutoff point on our interaction ranking from Section 4.2. We obtain this cutoff by constructing a Generalized Additive Model (GAM) with interactions: DISPLAYFORM1 x 1 x 2 + 2 x3+x5+x6 + 2 x3+x4+x5+x7 + sin(x 7 sin(x 8 + x 9)) + arccos(0.9x 10) DISPLAYFORM2 where g i (·) captures the main effects, g i (·) captures the interactions, and both g i and g i are small feedforward networks trained jointly via backpropagation. We refer to this model as MLP-Cutoff.We gradually add top-ranked interactions to the GAM, increasing K, until GAM performance on a validation set plateaus. The exact plateau point can be found by early stopping or other heuristic means, and we report DISPLAYFORM3 as the identified feature interactions. A variant to our interaction ranking algorithm tests for all pairwise interactions. Pairwise interaction detection has been a standard problem in the interaction detection literature BID25 BID9 due to its simplicity. Modeling pairwise interactions is also the de facto objective of many successful machine learning algorithms such as factorization machines BID31 and hierarchical lasso BID4.We rank all pairs of features {i, j} according to their interaction strengths ω({i, j}) calculated on the first hidden layer, where again the averaging function is min (·), and ω({i, j}) = p1 s=1 ω s ({i, j}). The higher the rank, the more likely the interaction exists. In this section, we discuss our experiments on both simulated and real-world datasets to study the performance of our approach on interaction detection. Averaging Function Our proposed NID framework relies on the selection of an averaging function (Sections 3.1, 4.2, and 4.4). We experimentally determined the averaging function by comparing representative functions from the generalized mean family BID5: maximum, root mean square, arithmetic mean, geometric mean, harmonic mean, and minimum, intuitions behind which were discussed in Section 3.1. To make the comparison, we used a test suite of 10 synthetic functions, which consist of a variety of interactions of varying order and overlap, as shown in TAB2. We trained 10 trials of MLP and MLP-M on each of the synthetic functions, obtained interaction rankings with our proposed greedy ranking algorithm (Algorithm 1), and counted the total number of correct interactions ranked before any false positive. In this evaluation, we ignore predicted interactions that are subsets of true higher-order interactions because the subset interactions are redundant (Section 2). As seen in FIG0, the number of true top interactions we recover is highest with the averaging function, minimum, which we will use in all of our experiments. A simple analytical study on a bivariate hidden unit also suggests that the minimum is closely correlated with interaction strength (Appendix B). TAB2 ) over 10 trials. 
x-axis labels are maximum, root mean square, arithmetic mean, geometric mean, harmonic mean, and minimum. Neural Network Configuration We trained feedforward networks of MLP and MLP-M architectures to obtain interaction rankings, and we trained MLP-Cutoff to find cutoffs on the rankings. In our experiments, all networks that model feature interactions consisted of four hidden layers with first-to-last layer sizes of: 140, 100, 60, and 20 units. In contrast, all individual univariate networks had three hidden layers with sizes of: 10, 10, and 10 units. All networks used ReLU activation and were trained using backpropagation. In the cases of MLP-M and MLP-Cutoff, summed networks were trained jointly. The objective functions were meansquared error for regression and cross-entropy for classification tasks. On the synthetic test suite, MLP and MLP-M were trained with L1 constants in the range of 5e-6 to 5e-4, based on parameter tuning on a validation set. On real-world datasets, L1 was fixed at 5e-5. MLP-Cutoff used a fixed L2 constant of 1e-4 in all experiments involving cutoff. Early stopping was used to prevent overfitting. Datasets We study our interaction detection framework on both simulated and real-world experiments. For simulated experiments, we used a test suite of synthetic functions, as shown in TAB2. The test functions were designed to have a mixture of pairwise and higher-order interactions, with varying order, strength, nonlinearity, and overlap. F 1 is a commonly used function in interaction detection literature BID19 BID39 BID25. All features were uniformly distributed between −1 and 1 except in F 1, where we used the same variable ranges as reported in literature BID19. In all synthetic experiments, we used random train/valid/test splits of 1/3 each on 30k data points. We use four real-world datasets, of which two are regression datasets, and the other two are binary classification datasets. The datasets are a mixture of common prediction tasks in the cal housing and bike sharing datasets, a scientific discovery task in the higgs boson dataset, and an example of very-high order interaction detection in the letter dataset. Specifically, the cal housing dataset is a regression dataset with 21k data points for predicting California housing prices BID29. The bike sharing dataset contains 17k data points of weather and seasonal information to predict the hourly count of rental bikes in a bikeshare system BID10. The higgs boson dataset has 800k data points for classifying whether a particle environment originates from the decay of a Higgs Boson BID0. Lastly, the letter recognition dataset contains 20k data points of transformed features for binary classification of letters on a pixel display BID12. For all real-world data, we used random train/valid/test splits of 80/10/10.Baselines We compare the performance of NID to that of three baseline interaction detection methods. Two-Way ANOVA utilizes linear models to conduct significance tests on the existence of interaction terms. Hierarchical lasso (HierLasso) BID4 applies lasso feature selection to extract pairwise interactions. RuleFit BID13 contains a statistic to measure pairwise interaction strength using partial dependence functions. Additive Groves (AG) BID39 ) is a nonparameteric means of testing for interactions by placing structural constraints on an additive model of regression trees. AG is a reference method for interaction detection because it directly detects interactions based on their non-additive definition. 
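Before turning to the results, a compact sketch of the greedy ranking traversal of Section 4.2 and the pairwise variant of Section 4.4 is given below; it is our own illustration, with random placeholder weights standing in for the input weight matrix and aggregated weights of a trained, regularized MLP-M.

# Greedy variable-order ranking (one candidate per order per hidden unit) and
# the exhaustive pairwise scoring, both with mu(.) = min.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
p, h = 10, 140
W1 = rng.normal(size=(h, p))      # input-to-first-hidden weights
z = np.abs(rng.normal(size=h))    # aggregated outgoing weights of first-layer units

strengths = defaultdict(float)
for i in range(h):                # traverse hidden units
    w = np.abs(W1[i])
    order = np.argsort(-w)        # features sorted by |weight|, descending
    for j in range(2, p + 1):     # top-ranked candidate of every order
        I = tuple(sorted(order[:j]))
        strengths[I] += z[i] * w[order[:j]].min()   # accumulate across units

ranking = sorted(strengths.items(), key=lambda kv: -kv[1])
print(ranking[:5])                # top-ranked variable-order interaction candidates

# Pairwise variant: score every pair {a, b} by summing per-unit strengths.
pairwise = {(a, b): float(sum(z[i] * min(abs(W1[i, a]), abs(W1[i, b]))
                              for i in range(h)))
            for a in range(p) for b in range(a + 1, p)}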
As discussed in Section 4, our framework NID can be used for pairwise interaction detection. To evaluate this approach, we used datasets generated by synthetic functions F 1 -F 10 (TAB2 that contain a mixture of pairwise and higher-order interactions, where in the case of higher-order interactions we tested for their pairwise subsets as in BID39 ; BID25 . AUC scores of interaction strength proposed by baseline methods and NID for both MLP and MLP-M are shown in TAB3 . We ran ten trials of AG and NID on each dataset and removed two trials with highest and lowest AUC scores. When comparing the AUCs of NID applied to MLP and MLP-M, we observe that the scores of MLP-M tend to be comparable or better, except the AUC for F 6 . On one hand, MLP-M performed better on F 2 and F 4 because these functions contain main effects that MLP would model as spurious interactions with other variables. On the other hand, MLP-M performed worse on F 6 because it modeled spurious main effects in the {8, 9, 10} interaction. Specifically, {8, 9, 10} can be approximated as independent parabolas for each variable (shown in Appendix I). In our analyses of NID, we mostly focus on MLP-M because handling main effects is widely considered an important problem in interaction detection BID4 BID24 BID22. Comparing the AUCs of AG and NID for MLP-M, the scores tend to close, except for F 5, F 6, and F 8, where NID performs significantly better than AG. This performance difference may be due to limitations on the model capacity of AG, which is tree-based. In comparison to ANOVA, HierLasso and RuleFit, NID-MLP-M generally performs on par or better. This is expected for ANOVA and HierLasso because they are based on quadratic models, which can have difficulty approximating the interaction nonlinearities present in the test suite. In Figure 4, heat maps of synthetic functions show the relative strengths of all possible pairwise interactions as interpreted from MLP-M, and ground truth is indicated by red cross-marks. The interaction strengths shown are normally high at the cross-marks. An exception is F 6, where NID proposes weak or negligible interaction strengths at the cross-marks corresponding to the {8, 9, 10} interaction, which is consistent with previous remarks about this interaction. Besides F 6, F 7 also shows erroneous interaction strengths; however, comparative detection performance by the baselines is similarly poor. Interaction strengths are also visualized on real-world datasets via heat maps (FIG1). For example, in the cal housing dataset, there is a high-strength interaction between x 1 and x 2. These variables mean longitude and latitude respectively, and it is clear to see that the outcome variable, California housing price, should indeed strongly depend on geographical location. We further observe high-strength interactions appearing in the heat maps of the bike sharing, higgs boson dataset, and letter datasets. For example, all feature pairs appear to be interacting in the letter dataset. The binary classification task from the letter dataset is to distinguish letters A-M from N-Z using 16 pixel display features. Since the decision boundary between A-M and N-Z is not obvious, it would make sense that a neural network learns a highly interacting function to make the distinction. We use our greedy interaction ranking algorithm (Algorithm 1) to perform higher-order interaction detection without an exponential search of interaction candidates. 
We first visualize our higher-order interaction detection algorithm on synthetic and real-world datasets, then we show how the predictive capability of detected interactions closes the performance gap between MLP-Cutoff and MLP-M. Next, we discuss our experiments comparing NID and AG with added noise, and lastly we verify that our algorithm obtains significant improvements in runtime. Figure 4: Heat maps of pairwise interaction strengths proposed by our NID framework on MLP-M for datasets generated by functions F 1 -F 10 (TAB2). We visualize higher-order interaction detection on synthetic and real-world datasets in Figures 6 and 7 respectively. The plots correspond to higher-order interaction detection as the ranking cutoff is applied (Section 4.3). The interaction rankings generated by NID for MLP-M are shown on the x-axes, and the blue bars correspond to the validation performance of MLP-Cutoff as interactions are added. For example, the plot for cal housing shows that adding the first interaction significantly reduces RMSE. We keep adding interactions into the model until reaching a cutoff point. In our experiments, we use a cutoff heuristic where interactions are no longer added after MLP-Cutoff's validation performance reaches or surpasses MLP-M's validation performance (represented by horizontal dashed lines). As seen with the red cross-marks, our method finds true interactions in the synthetic data of F 1 -F 10 before the cutoff point. Challenges with detecting interactions are again mainly associated with F 6 and F 7, which have also been difficult for baselines in the pairwise detection setting (TAB3). For the cal housing dataset, we obtain the top interaction {1, 2} just as in our pairwise test (FIG1, cal housing), where now the {1, 2} interaction contributes a significant improvement in MLP-Cutoff performance. Similarly, from the letter dataset we obtain a 16-way interaction, which is consistent with its highly interacting pairwise heat map (FIG1). For the bike sharing and higgs boson datasets, we note that even when considering many interactions, MLP-Cutoff eventually reaches the cutoff point with a relatively small number of superset interactions. This is because many subset interactions become redundant when their corresponding supersets are found. In our evaluation of interaction detection on real-world data, we study detected interactions via their predictive performance. By comparing the test performance of MLP-Cutoff and MLP-M with respect to MLP-Cutoff without any interactions (MLP-Cutoff Ø), we can compute the relative test performance improvement obtained by including detected interactions. These relative performance improvements are shown in TAB7 for the real-world datasets as well as four selected synthetic datasets, where performance is averaged over ten trials per dataset. Figure 6: MLP-Cutoff error with added top-ranked interactions (along the x-axis) of F 1 -F 10 (TAB2), where the interaction rankings were generated by the NID framework applied to MLP-M. Red cross-marks indicate ground truth interactions, and Ø denotes MLP-Cutoff without any interactions; subset interactions become redundant when their true superset interactions are found. Figure 7 shows the analogous plots for the real-world datasets, with rankings again generated by the NID framework on MLP-M.
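A minimal sketch of the cutoff heuristic just described is given below; `fit_mlp_cutoff` is a hypothetical helper standing in for training MLP-Cutoff with a given interaction set and returning its validation error.

```python
def apply_ranking_cutoff(ranking, mlp_m_val_error, fit_mlp_cutoff):
    """Sketch of the ranking-cutoff heuristic described above. `fit_mlp_cutoff`
    is assumed to train MLP-Cutoff with the given interaction set and return
    its validation error (RMSE or cross-entropy)."""
    selected = []
    for interaction in ranking:            # ranking: best interaction first
        selected.append(interaction)
        val_error = fit_mlp_cutoff(selected)
        # Stop adding interactions once MLP-Cutoff reaches or surpasses
        # MLP-M's validation performance.
        if val_error <= mlp_m_val_error:
            break
    return selected
```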
The results of this study show that a relatively small number of interactions of variable order are highly predictive of their corresponding datasets, as true interactions should be. We further study higher-order interaction detection of our NID framework by comparing it to AG in both interaction ranking quality and runtime. To assess ranking quality, we design a metric, top-rank recall, which computes a recall of proposed interaction rankings by only considering those interactions that are correctly ranked before any false positive. The number of top correctly-ranked interactions is then divided by the true number of interactions. Because subset interactions are redundant in the presence of corresponding superset interactions, only such superset interactions can count as true interactions, and our metric ignores any subset interactions in the ranking. (A small sketch of this computation is given at the end of this section.) We compute the top-rank recall of NID on MLP and MLP-M, the scores of which are averaged across all tests in the test suite of synthetic functions (TAB2) with 10 trials per test function. For each test, we remove the two trials with maximum and minimum recall. We conduct the same tests using the state-of-the-art interaction detection method AG, except with only one trial per test because AG is very computationally expensive to run. In FIG4, we show top-rank recall of NID and AG at different Gaussian noise levels, and in FIG4 we show runtime comparisons on real-world and synthetic datasets (panel (b) of the figure compares runtimes, where NID runtime with and without cutoff are both measured). As shown, NID can obtain top-rank recall similar to AG while running orders of magnitude faster. In higher-order interaction detection, our NID framework can have difficulty detecting interactions from functions with interlinked interacting variables. For example, a clique x 1 x 2 + x 1 x 3 + x 2 x 3 only contains pairwise interactions. When detecting pairwise interactions (Section 5.2), NID often obtains an AUC of 1. However, in higher-order interaction detection, the interlinked pairwise interactions are often confused for single higher-order interactions. This issue could mean that our higher-order interaction detection algorithm fails to separate interlinked pairwise interactions encoded in a neural network, or that the network approximates interlinked low-order interactions as higher-order interactions. Another limitation of our framework is that it sometimes detects spurious interactions or misses interactions as a result of correlations between features; however, correlations are known to cause such problems for any interaction detection method BID39 BID25. We presented our NID framework, which detects statistical interactions by interpreting the learned weights of a feedforward neural network. The framework has the practical utility of accurately detecting general types of interactions without searching an exponential solution space of interaction candidates. Our core insight was that interactions between features must be modeled at common hidden units, and our framework decoded the weights according to this insight.
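The sketch of top-rank recall referenced above follows; the interaction representation (sets of feature indices) is illustrative.

```python
def top_rank_recall(ranking, true_interactions):
    """Top-rank recall as described above: count proposed interactions that are
    correctly ranked before the first false positive, ignoring redundant subset
    interactions, then divide by the number of true (superset) interactions."""
    truths = [frozenset(t) for t in true_interactions]
    correct = 0
    for proposed in ranking:               # ranking: strongest first
        proposed = frozenset(proposed)
        if proposed in truths:
            correct += 1
        elif any(proposed < t for t in truths):
            continue                        # redundant subset: ignored
        else:
            break                           # first false positive stops the count
    return correct / len(truths)
```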
In future work, we plan to detect feature interactions by accounting for common units in intermediate hidden layers of feedforward networks. We would also like to use the perspective of interaction detection to interpret weights in other deep neural architectures. A PROOF AND DISCUSSION FOR PROPOSITION 2Given a trained feedforward neural network as defined in Section 2.3, we can construct a directed acyclic graph G = (V, E) based on non-zero weights as follows. We create a vertex for each input feature and hidden unit in the neural network: V = {v,i |∀i,}, where v,i is the vertex corresponding to the i-th hidden unit in the -th layer. Note that the final output y is not included. We create edges based on the non-zero entries in the weight matrices, i.e., DISPLAYFORM0 Note that under the graph representation, the value of any hidden unit is a function of parent hidden units. In the following proposition, we will use vertices and hidden units interchangeably. Proposition 2 (Interactions at Common Hidden Units). Consider a feedforward neural network with input feature DISPLAYFORM1, there exists a vertex v I in the associated directed graph such that I is a subset of the ancestors of v I at the input layer (i.e., = 0).Proof. We prove Proposition 2 by contradiction. Let I be an interaction where there is no vertex in the associated graph which satisfies the condition. Then, for any vertex v L,i at the L-th layer, the value f i of the corresponding hidden unit is a function of its ancestors at the input layer I i where I ⊂ I i.Next, we group the hidden units at the L-th layer into non-overlapping subsets by the first missing feature with respect to the interaction I. That is, for element i in I, we create an index set S i ∈ [p L]: DISPLAYFORM2 Note that the final output of the network is a weighed summation over the hidden units at the L-th layer: DISPLAYFORM3 Since that j∈Si w y j f j x Ij is not a function of x i, we have that ϕ (·) is a function without the interaction I, which contradicts our assumption. The reverse of this statement, that a common descendant will create an interaction among input features, holds true in most cases. The existence of counterexamples is manifested when early hidden layers capture an interaction that is negated in later layers. For example, the effects of two interactions may be directly removed in the next layer, as in the case of the following expression: max{w 1 x 1 + w 2 x 2, 0} − max{−w 1 x 1 − w 2 x 2, 0} = w 1 x 1 + w 2 x 2. Such an counterexample is legitimate; however, due to random fluctuations, it is highly unlikely in practice that the w 1 s and the w 2 s from the left hand side are exactly equal. We can provide a finer interaction strength analysis on a bivariate ReLU function: max{α 1 x 1 + α 2 x 2, 0}, where x 1, x 2 are two variables and α 1, α 2 are the weights for this simple network. We quantify the strength of the interaction between x 1 and x 2 with the cross-term coefficient of the best quadratic approximation. That is, β 0,..., β 5 = argmin βi,i=0,...,5 DISPLAYFORM0 Then for the coefficient of interaction {x 1, x 2}, β 5, we have that, DISPLAYFORM1 Note that the choice of the region (−1, 1) × (−1, 1) is arbitrary: for larger region (−c, c) × (−c, c) with c > 1, we found that |β 5 | scales with c −1. By the of Proposition B, the strength of the interaction can be well-modeled by the minimum value between |α 1 | and |α 2 |. Note that the factor before min{|α 1 |, |α 2 |} in Equation FORMULA13 Proof. 
For non-differentiable φ (·) such as the ReLU function, we can replace it with a series of differentiable 1-Lipschitz functions that converges to φ (·) in the limit. Therefore, without loss of generality, we assume that φ (·) is differentiable with |∂ x φ(x)| ≤ 1. We can take the partial derivative of the final output with respect to h i, the i-th unit at the -th hidden layer: DISPLAYFORM2 We can conclude the Lemma by proving the following inequality: DISPLAYFORM3 The left-hand side can be re-written as DISPLAYFORM4 The right-hand side can be re-written as DISPLAYFORM5 We can conclude by noting that |∂ x φ(x)| ≤ 1. Theorem 4 (Improving the ranking of higher-order interactions). Let R be the set of interactions proposed by Algorithm 1 with µ (·) = min (·), let I ∈ R be a d-way interaction where d ≥ 3, and let S be the set of subset (d − 1)-way interactions of I where |S| = d. Assume that for any hidden unit j which proposed s ∈ S ∩ R, I will also be proposed at the same hidden unit, and DISPLAYFORM0. Then, one of the following must be true: a) ∃s ∈ S ∩ R ranked lower than I, i.e., ω(I) > ω(s), or b) ∃s ∈ S where s / ∈ R.Proof. Suppose for the purpose of contradiction that S ⊆ R and ∀s ∈ S, ω(s) ≥ ω(I). Because DISPLAYFORM1 which is a contradiction. E ROC CURVES We evaluate our approach in a large p setting with pairwise interactions using the same synthetic function as in BID30. Specifically, we generate a dataset of n samples and p features {X (i), y (i) } using the function DISPLAYFORM2 where X (i) ∈ R p is the i th instance of the design matrix X ∈ R p×n, y (i) ∈ R is the i th instance of the response variable y ∈ R n×1, W ∈ R p×p contains the weights of pairwise interactions, β ∈ R p contains the weights of main effects, (i) is noise, and i = 1,..., n. W was generated as a sum of K rank one matrices, W = K k=1 a k a k. In this experiment, we set p = 1000, n = 1e4, and K = 5. X is normally distributed with mean 0 and variance 1, and (i) is normally distributed with mean 0 and variance 0.1. Both a k and β are sparse vectors of 2% nonzero density and are normally distributed with mean 0 and variance 1. We train MLP-M with the same hyperparameters as before (Section 5.1) but with a larger main network architecture of five hidden layers, with first-to-last layers sizes of 500, 400, 300, 200, and 100. Interactions are then extracted using the NID framework. From this experiment, we obtain a pairwise interaction strength AUC of 0.984 on 950 ground truth pairwise interactions, where the AUC is measured in the same way as those in TAB3. The corresponding ROC curve is shown in Figure 10. We compare the average performance of NID for different regularizations on MLP-M networks. Specifically, we compare interaction detection performance when an MLP-M network has L1, L2, or group lasso regularization 4. While L1 and L2 are common methods for regularizing neural network weights, group lasso is used to specifically regularize groups in the input weight matrix because weight connections into the first hidden layer are especially important in this work. In particular, we study group lasso by 1) forming groups associated with input features, and 2) forming groups associated with hidden units in the input weight matrix. In this experimental setting, group lasso effectively conducts variable selection for associated groups. BID35 define group lasso regularization for neural networks in Equation 5. Denote group lasso with input groups a R (i) GL and group lasso with hidden unit groups as R (h) GL. 
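The exact equations from BID35 are not reproduced in the extracted text, so the sketch below uses the standard group lasso form, the sum of group-size-weighted L2 norms, with groups taken either as columns (input features) or rows (first-layer hidden units) of the input weight matrix; it is meant only to illustrate the two grouping choices, not to match Equation 5 exactly.

```python
import numpy as np

def group_lasso_penalty(W_in, groups="inputs"):
    """Textbook group lasso penalty, sum_g sqrt(|g|) * ||w_g||_2, applied to the
    input weight matrix W_in of shape (p1 hidden units, d input features).
    groups="inputs" groups by feature (columns); groups="hidden" groups by
    first-layer unit (rows)."""
    axis = 0 if groups == "inputs" else 1
    group_norms = np.linalg.norm(W_in, axis=axis)
    group_size = W_in.shape[axis]           # every group has the same size here
    return np.sqrt(group_size) * group_norms.sum()
```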
In order to apply both group and individual level sparsity, BID35 further define sparse group lasso in Equation 7. Denote sparse group lasso with input groups as R (i) SGL and sparse group lasso with hidden unit groups as R (h) Networks that had group lasso or sparse group lasso applied to the input weight matrix had L1 regularization applied to all other weights. In our experiments, we use large dataset sizes of 1e5 and tune the regularizers by gradually increasing their respective strengths from zero until validation performance worsens from underfitting. The rest of the neural network hyperparameters are the same as those discussed in Section 5.1. In the case of the group lasso and sparse group lasso experiments, L1 norms were tuned the same as in the standard L1 experiments. In Table 4, we report average pairwise interaction strength AUC over 10 trials of each function in our synthetic test suite TAB2 for the different regularizers. Table 4: Average AUC of pairwise interaction strengths proposed by NID for different regularizers. Evaluation was conducted on the test suite of synthetic functions TAB2. DISPLAYFORM0 average 0.94 ± 2.9e−2 0.94 ± 2.5e−2 0.95 ± 2.4e−2 0.94 ± 2.5e−2 0.93 ± 3.2e−2 0.94 ± 3.0e−2 We perform experiments with our NID approach on synthetic datasets that have binary class labels as opposed to continuous outcome variables (e.g. TAB2). In our evaluation, we compare our method against two logistic regression methods for multiplicative interaction detection, Factorization Based High-Order Interaction Model (FHIM) BID30 and Sparse High-Order Logistic Regression (Shooter). In both comparisons, we use dataset sizes of p = 200 features and n = 1e4 samples based on MLP-M's fit on the data and the performance of the baselines. We also make the following modifications to MLP-M hyperparameters based on validation performance: the main MLP-M network has first-to-last layer sizes of 100, 60, 20 hidden units, the univariate networks do not have any hidden layers, and the L1 regularization constant is set to 5e−4. All other hyperparameters are kept the same as in Section 5.1. When used in a logistic regression model, FHIM detects pairwise interactions that are predictive of binary class labels. For this comparison, we used data generated by Equation 5 in BID30, with K = 2 and sparsity factors being 5% to generate 73 ground truth pairwise interactions. In TAB8, we report average pairwise interaction detection AUC over 10 trials, with a maximum and a minimum AUC removed. FHIM NID average 0.925 ± 2.3e−3 0.973 ± 6.1e−3Shooter Min et al. FORMULA13 developed Shooter, an approach of using a tree-structured feature expansion to identify pairwise and higher-order multiplicative interactions in a L1 regularized logistic regression model. This approach is special because it relaxes our hierarchical hereditary assumption, which requires subset interactions to exist when their corresponding higher-order interaction also exists (Section 3). Specifically, Shooter relaxes this assumption by only requiring at least one (d − 1)-way interaction to exist when its corresponding d-way interaction exists. With this relaxed assumption, Shooter can be evaluated in depth per level of interaction order. We compare NID and Shooter under the relaxed assumption by also evaluating NID per level of interaction order, where Algorithm 1 is specifically being evaluated. 
We note that our method of finding a cutoff on interaction rankings (Section 4.3) strictly depends on the hierarchical hereditary assumption both within the same interaction order and across orders, so instead we set cutoffs by thresholding the interaction strengths by a low value, 1e−3.For this comparison, we generate and consider interaction orders up to degree 5 (5-way interaction) using the procedure discussed in, where the interactions do not have strict hierarchical correspondence. We do not extend beyond degree 5 because MLP-M's validation performance begins to degrade quickly on the generated dataset. The sparsity factor was set to be 5%, and to simplify the comparison, we did not add noise to the data. In TAB9, we report precision and recall scores of Shooter and NID, where the scores for NID are averaged over 10 trials. While Shooter performs near perfectly, NID obtains fair precision scores but generally low recall. When we observe the interactions identified by NID per level of interaction order, we find that the interactions across levels are always subsets or supersets of another predicted interaction. This strict hierarchical correspondence would inevitably cause NID to miss many true interactions under this experimental setting. The limitation of Shooter is that it must assume the form of interactions, which in this case is multiplicative. 87.5% 100% 90% ± 11% 21% ± 9.6% 3 96.0% 96.0% 91% ± 8.4% 19% ± 8.5% 4 100% 100% 60% ± 13% 21% ± 9.3% 5 100% 100% 73% ± 8.4% 30% ± 13% In the synthetic function F 6 TAB3, the {8, 9, 10} interaction, x 2 8 + x 2 9 + x 2 10, can be approximated as main effects for each variable x 8, x 9, and x 10 when at least one of the three variables is close to −1 or 1. Note that in our experiments, these variables were uniformly distributed between −1 and 1., and x 10. The MLP-M was trained on data generated from synthetic function F 6 (TAB3). Note that the plots are subject to different levels of bias from the MLP-M's main multivariate network. For example, let x 10 = 1 and z 2 = x 2 8 + x By symmetry under the assumed conditions, where c is a constant. In FIG10, we visualize the x 8, x 9, x 10 univariate networks of a MLP-M (Figure 2) that is trained on F 6. The plots confirm our hypothesis that the MLP-M models the {8,9,10} interaction as spurious main effects with parabolas scaled by In FIG13, we visualize the interaction between longitude and latitude for predicting relative housing price in California. This visualization is extracted from the longitude-latitude interaction network within MLP-Cutoff 5, which was trained on the cal housing dataset BID29. We can see that this visualization cannot be generated by longitude and latitude information in additive form, but rather the visualization needs special joint information from both features. The highly interacting nature between longitude and latitude confirms the high rank of this interaction in our NID experiments (see the {1, 2} interaction for cal housing in FIG1. | We detect statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. | 645 | scitldr |
The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning. Despite its apparent successes in these settings, to the best of our knowledge there has been no systematic exploration of its capabilities on simple regression tasks. In this work we characterize these on the UCI datasets, a popular benchmark for Bayesian regression models, as well as on the recently introduced''gap'' datasets, which are better tests of out-of-distribution uncertainty. We demonstrate that the neural linear model is a simple method that shows competitive performance on these tasks. Despite the recent successes that neural networks have shown in an impressive range of tasks, they tend to be overconfident in their predictions . Bayesian neural networks (BNNs;) attempt to address this by providing a principled framework for uncertainty estimation in predictions. However, inference in BNNs is intractable to compute, requiring approximate inference techniques. Of these, Monte Carlo methods and variational methods, including Monte Carlo dropout (MCD) , are popular; however, the former are difficult to tune, and the latter are often limited in their expressiveness (b; ; a). The neural linear model represents a compromise between tractability and expressiveness for BNNs in regression settings: instead of attempting to perform approximate inference over the entire set of weights, it performs exact inference on only the last layer, where prediction can be done in closed form. It has recently been used in active learning , Bayesian optimization , reinforcement learning , and AutoML , among others; however, to the best of our knowledge, there has been no systematic attempt to benchmark the model in the simple regression setting. In this work we do so, first demonstrating the model on a toy example, followed by experiments on the popular UCI datasets (as in Hernández-) and the recent UCI gap datasets from Foong et al. (2019b), who identified (along with) well-calibrated'in-between' uncertainty as a desirable feature of BNNs. In this section, we briefly describe the different models we train in this work, which are variations of the neural linear (NL) model, in which a neural network extracts features from the input to be used as basis functions for Bayesian linear regression. The central issue in the neural linear model is how to train the network: in this work, we provide three different models, with a total of four different training methods. For a more complete mathematical description of the models, refer to Appendix A; we summarize the models in Appendix C. , we can first train the neural network using maximum a posteriori (MAP) estimation. After this training phase, the outputs of the last hidden layer of the network are used as the features for Bayesian linear regression. To reduce overfitting, the noise variance and prior variance (for the Bayesian linear regression) are subsequently marginalized out by slice sampling according to the tractable marginal likelihood, using uniform priors. We refer to this model as the maximum a posteriori neural linear model (which we abbreviate as MAP-L NL, where L is the number of hidden layers in the network). We tune the hyperparameters for the MAP estimation via Bayesian optimization . Regularized NL The MAP NL model's basis functions are learned independently of the final model's predictions. 
This is an issue for uncertainty quantification, as MAP training has no incentive to learn features useful for providing uncertainty in out-of-distribution areas. To address this issue, we propose to learn the features by optimizing the (tractable) marginal likelihood with respect to the network weights (previous to the output layer), treating them as hyperparameters of the model in an approach analogous to hyperparameter optimization in Gaussian process (GP) regression . However, unlike in GP regression, the per-iteration computational cost of this method is linear in the size of the data. We additionally regularize the weights to reduce overfitting, ing in a model we call regularized neural linear (which we abbreviate as Reg-L NL). As in the MAP NL model, we marginalize out the noise and prior variances via slice sampling. We tune the regularization and other hyperparameters via Bayesian optimization. Bayesian noise NL Instead of using slice sampling for the noise variance, we can place a normal-inverse-gamma (N-Γ −1) prior on the weights and noise variance. This formulation is still tractable, and integrates the marginalization of the noise variance into the model itself, rather than having it implemented after the features are learned. Additionally, the N-Γ −1 prior can act as a regularizer, meaning that we can avoid using Bayesian optimization to tune the prior parameters by jointly optimizing the marginal likelihood over all hyperparameters. However, this risks overfitting. Therefore, we consider training this model, which we call the Bayesian noise (BN) neural linear model, both by maximizing the marginal likelihood for all parameters (including prior parameters), and by tuning the prior parameters with Bayesian optimization. We abbreviate the first as BN(ML)-L NL and the second as BN(BO)-L NL. Finally, in both cases we slice sample the remaining (non-weight) hyperparameters. We compare these models on a toy problem, the UCI datasets, and the UCI "gap" datasets (b). In all experiments, we consider 1-and 2-layer ReLU fullyconnected networks with 50 hidden units in each layer (except for the toy problem, where we only consider 2-layer networks). We also provide for simple MAP inference as a baseline. For experimental details, refer to Appendix B. We provide additional experimental , including detailed statistical comparisons of the models, in Appendix D. Toy problem We construct a synthetic 1-D dataset comprising 100 train and 100 test pairs (x, y), where x is sampled i.i.d. in the range [−4, −2] ∪ and y is generated as y = x 3 +, ∼ N. This follows the example from Hernández-, with the exception of the "gap" added in the range for x, which was motivated by Foong et al. (2019b) and. We plot predictive distributions for each model in Figure 1. Somewhat surprisingly, the MAP-2 NL model seems to struggle more than MAP with uncertainty in the gap, while having better uncertainty quantification at the edges. Of the marginal likelihood-based methods, the BN(BO)-2 NL model qualitatively seems to perform the best. UCI datasets We next provide on the UCI datasets in Hernández- (omitting the 'year' dataset due to its size), a popular benchmark for Bayesian regression models in recent years. We report average test log likelihoods and RMSEs for all the models in Appendix D.1, for both 1-and 2-layer architectures. We visualize average test log likelihoods for the models in Figure 2; we tabulate the log likelihoods and RMSEs in Tables 2 and 3 in Appendix D.1, respectively. 
From the figure and tables, we see that the BN(ML)-2 NL and BN(BO)-2 NL models have the best performance on these metrics, with reasonable log likelihoods and RMSEs compared to those in the literature for other BNN-based methods (Hernández-; ; ; Tomczak et al.). In fact, these neural linear methods tend to achieve state-of-the-art or near state-of-the-art neural network performance on the'energy' and'naval' datasets. While the performance of the Reg-L NL model is decent, it performs worse than the BN-L NL models, showing the advantage of a Bayesian treatment of the noise variance. UCI gap datasets Finally, we provide on the UCI "gap" datasets proposed by Foong et al. (2019b), which consists of training and testing splits that artificially contain gaps in the training set, ensuring that the model will only succeed if it can represent uncertainty in-between gaps in the data. We again visualize test log likelihoods in Figure 3 while tabulating log likelihoods and RMSEs in Tables 6 and 7 in Appendix D.1. Our on the MAP-based models in Figure 3 echo those of Foong et al. (2019b), showing catastrophic failure to express in-between uncertainty for some datasets (particularly 'energy' and 'naval'). Somewhat surprisingly, the Reg-L NL models perform the worst of all the models. However, the BN NL models do not seem to fail catastrophically, with the BN(BO)-2 NL model having by far the best performance. In all of these models we used some form of hyperparameter tuning (Bayesian optimization for all models except the BN(ML)-L NL models, where we used a grid search) to obtain the shown. However, for the practitioner, performing an oftentimes costly hyperparameter search is not desirable, particularly where one of the main motivations for using the model is its simplicity, as in this case. We therefore investigate the effect of the hyperparameter tuning on the models' performance. Figure 4 shows the difference in average test log likelihoods and test RMSEs between the tuned models and models whose hyperparameters were set to "reasonable" values that a practitioner might choose by intuition (see Appendix D.2 for details) for the UCI datasets. We observe that for each of the two-layer models there exists at least one dataset where the performance in terms of test log likelihood is significantly worsened by omitting hyperparameter tuning. The performance difference for RMSEs is not as drastic, although it still exists. In Appendix D.2 we show that these extend to the UCI gap datasets and that the difference in performance is statistically significant for nearly all models across both the UCI and UCI gap datasets, for both log likelihood and RMSE performance. Finally, in Appendix D.2.1 we show that mean field variational inference (MFVI) (; ;) and MCD can still obtain reasonable, although not state-of-the-art, performance on the UCI datasets without hyperparameter tuning: in many cases the performance is even competitive with the tuned NL models. However, these suffer from the pathologies identified in Foong et al. (2019b);; Foong et al. (2019a) on the gap datasets. We have shown benchmark for different variants of the neural linear model in the regression setting. Our show that the successes these models have seen in other areas such as reinforcement and active learning are not unmerited, with the models achieving generally good performance despite their simplicity. Furthermore, they are not as susceptible to the the inability to express gap uncertainty as MFVI or MCD. 
However, we have shown that to obtain reasonable performance extensive hyperparameter tuning is often required, unlike MFVI or MCD. Finally, our work suggests that exact inference on a subset of parameters can perform better than approximate inference on the entire set, at least for BNNs. We believe this broader issue is worthy of further investigation. The neural linear model uses a neural network to parameterize basis functions for Bayesian linear regression by treating the output weights and bias of the network probabilistically, while treating the rest of the network's parameters θ as hyperparameters. This can be used as an approximation to full Bayesian inference of the neural network's parameters, with the main advantage being that this simplified case is tractable (assuming Gaussian prior and likelihood). Given the fact that there are significant redundancies in the weight-space posterior for BNNs, this tradeoff may not be a completely unreasonable approximation. We now describe the model mathematically., where (x n, y n) ∈ R d × R, be the training data, and let T represent the outputs (post-activations) of the last hidden layer of the neural network, which will be parameterized by all the weights and biases up to the last layer, θ. We then define a weight vector w ∈ R M = R N L +1 (this includes a bias term, augmenting φ θ (x) with a 1). If we define a design matrix Φ θ = [φ θ (x 1),..., φ θ (x N)] T, we can then define our model as where we treat Y as a column vector of the y n. Given an appropriate θ, Bayesian inference of the weights w is straightforward: given a prior p(w) = N (w; 0, αI M) on the weights, the posterior is given by The posterior predictive for a test input x * is then given by It now remains to be determined how to learn θ. As described in , we can learn θ by simply setting it to the values of the corresponding weights and biases in a maximum a posteriori (MAP)-trained network, maximizing the objective with respect to θ F ull and σ 2, where θ F ull represents the parameters of the full network (which includes the output weights and bias), and γ is a regularization parameter. As in , once we have obtained θ from θ F ull, we use can use Bayesian linear regression as outlined above. However, the question of setting α still remains. To address this, we marginalize α and σ 2 out by slice sampling them according to the log marginal likelihood of the data: In order to learn a suitable value of γ, along with learning rates and number of epochs, we use Bayesian optimization. For a complete description of the experimental details, see Appendix B. One key disadvantage of this approach is that it separates the feature learning from prediction: in particular, there is no reason for the network to learn features relevant for out-of-distribution prediction, particularly when it comes to uncertainty estimates. From a Bayesian perspective, the neural linear model can be interpreted as a Gaussian process model with a covariance kernel determined by a finite number of basis functions φ θ,i with hyperparameters θ. Therefore, as in Gaussian process regression, we propose to maximize the log marginal likelihood of the data, L θ,α,σ 2 (D), with respect to θ and σ as the hyperparameters of the model for an empirical Bayes approach. Note that the computational complexity of this expression is O(N + M 3), as opposed to the O(N 3) cost typically seen in GP regression. 
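For concreteness, the following is a sketch of evaluating this log marginal likelihood at the stated O(N + M^3) cost; the notation is ours, and the derivation is the standard Gaussian evidence rewritten with the Woodbury and matrix determinant identities rather than the paper's exact expression, which is not reproduced in the extracted text.

```python
import numpy as np

def log_marginal_likelihood(Phi, y, alpha, sigma2):
    """Gaussian evidence of Bayesian linear regression with prior variance alpha
    and noise variance sigma2, evaluated by solving only an M x M system."""
    N, M = Phi.shape
    A = np.eye(M) / alpha + Phi.T @ Phi / sigma2          # M x M posterior precision
    Phi_t_y = Phi.T @ y
    # Quadratic term y^T (sigma2 I + alpha Phi Phi^T)^{-1} y via Woodbury.
    quad = y @ y / sigma2 - Phi_t_y @ np.linalg.solve(A, Phi_t_y) / sigma2 ** 2
    # log|sigma2 I_N + alpha Phi Phi^T| = N log sigma2 + M log alpha + log|A|.
    _, logdet_A = np.linalg.slogdet(A)
    logdet = N * np.log(sigma2) + M * np.log(alpha) + logdet_A
    return -0.5 * (N * np.log(2.0 * np.pi) + logdet + quad)
```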
This is because we are able to apply the Woodbury identity to obtain the determinant in terms of V N, which is M × M, due to the fact that there is a finite number of basis functions. Since we typically have that N M, this in significant computational savings. One issue with this Type-2 maximum likelihood approach is that it will tend to overfit to the training data due to the large number of hyperparameters θ. As a , the noise variance σ 2 will tend to be pushed towards zero. One way of addressing this is by introducing a regularization scheme. There are many potential regularization schemes that could be introduced: we could regularize θ, α, or σ individually, or using any combination of the three. We found empirically that of these, simply regularizing θ alone via L 2 regularization seemed the most promising approach. This in a Type-2 MAP approach wherein we maximizeL where we have divided θ into weights θ W and biases θ b and introduced regularization hyperparameters γ W and γ b. An alternative to regularization would be to treat the noise variance in a Bayesian manner by integrating it out. Fortunately, for Bayesian linear regression this is still tractable with the use of a normal-inverse-gamma prior on the outputs weights and parameters The posterior has the form with posterior predictive where T (· ; µ, Σ, ν) is a Student's t-distribution with mean µ, scale Σ, and degrees of freedom ν. As before, we train the network using empirical Bayes, where the marginal likelihood is given by Note that by using the Woodbury identity it is possible to compute this in O(N + M 3) computational cost as before. All neural networks tested were ReLU networks with one or two 50-unit hidden layers. When using a validation set, we set its size to be one fifth of the size of the training set, except for the toy example, where we used half the training set. We now describe the experimental setup for each model we used. MAP For the MAP baseline, we select a batch size of 32. We subsequently use Bayesian optimization (see section B.1 for a description of the Bayesian optimization algorithm we use) to optimize four hyperparameters using validation log likelihood: the regularization parameter γ, a learning rate for the weights, a learning rate for the noise variance, and the number of epochs. The regularization parameter is allowed to vary within the range corresponding to a log prior variance between -5 and 5. The learning rates are also optimized in log space in the range [log 1e-4, log 1e − 2]. Finally, the number of epochs is set to vary between zero and the number required to obtain at least 10000 gradient steps (the number of epochs will thus vary with the size of the dataset given a constant batch size). We initialize the regularization parameter to 0.5, the learning rates at 1e-3, the noise variance at e −3, and the number of epochs at the maximum value. The network itself is optimized using ADAM . MAP NL For the MAP neural linear model, we take the above optimal MAP network and obtain 200 slice samples of α W (the output weight prior variance), α b (the output bias prior variance), and σ 2 for Bayesian linear regression. We initialize α W = 1/50 and α b = 1, to match the scaling used in. Regularized NL For the regularized NL model, there are five hyperparameters which we tune via Bayesian optimization: γ W, γ b, a learning rate for θ, a learning rate for σ 2, and the number of epochs. 
We allow γ W and γ b to vary within a range of log prior variances between -10 and 10, and the number of epochs to be in the range of (since each epoch corresponds to one gradient step). The ranges for the other parameters remain the same. We initialize γ W and γ b to 1, and the remaining parameters the same way as in the MAP model. We again initialize α W = 1/50 and α b = 1. As before, we use 200 slice samples to marginalize out σ 2, α W, and α b after the Bayesian optimization was completed. Bayesian noise NL (ML) Here we optimize the parameters θ, a 0, b 0, α W, and α b directly and jointly via the log marginal likelihood. We employ early stopping by tracking the validation log likelihood up to 5000 epochs, and also maximize the validation log likelihood over a grid of 10 learning rates ranging logarithmically from log 1e-4 to log 1e-2. We also initialize a 0 = b 0 = 1 and α W = α b = 1. Finally, we use slice sampling to obtain 200 samples to marginalize out these hyperparameters. Bayesian noise NL (BO) Instead of optimizing over the hyperparameters jointly as in the BN(ML) model, we keep all except θ fixed over each iteration of Bayesian optimization. We retain the same initializations, and allow the following ranges for the hyperparameters:, 10], with the ranges for the learning rate and number of epochs being the same as before. We retain the same initializations as before as well. The slice sampling also remains the same. Here we describe the Bayesian optimization algorithm that we used throughout. In each case we attempt to maximize the validation log likelihood. We largely follow the formulation set out in. We use a Gaussian process with a Matérn-5/2 kernel with the model hyperparameters as inputs and the validation log likelihoods as outputs (normalizing the inputs and outputs). We first learn the kernel hyperparameters (including a noise variance) by maximizing the marginal likelihood of the GP, using 5000 iterations of ADAM with a learning rate of 1e-2. We then obtain 20 slice samples of the GP hyperparameters, before using the expected improvement acquisition function to find the next set of network hyperparameters to test. In total, we use 50 iterations of Bayesian optimization for each model, initialized with 10 iterations of random search. In Table 1, we provide a summary of the models we use, describing which parameters are optimized and how (we exclude learning rates and the number of epochs from this Table 1: Summary of the models presented. The first column lists the model; the second shows the optimization objective, while the third shows which parameters were optimized using this objective. Meanwhile, the fourth lists the parameters that were tuned using Bayesian optimization, while the final lists the parameters that slice sampling was performed on. In this appendix, we provide the full from the main text, before briefly describing empirically the effect of slice sampling on the models. On the next pages, we present tables of average test log likelihoods and test RMSEs for the UCI and UCI gap datasets for all models. For the UCI datasets, we present the average test log likelihoods and test RMSEs in Tables 2 and 3, as well as train log likelihoods and RMSEs in Tables 4 and 5 . , we also compute average ranks of the models across all splits of the standard UCI datasets. As in , we additionally follow the procedure for the Friedman test as described in Demšar, generating the plots shown in Figures 5 and 6. 
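All of the variants above end with a univariate slice sampling step (200 samples) over remaining hyperparameters such as the prior and noise variances under the log marginal likelihood. A minimal sketch of such a sampler, in the stepping-out and shrinkage style of Neal (2003), is given below; the step width and other details are illustrative rather than the settings actually used, and multiple hyperparameters would be handled coordinate-wise.

```python
import numpy as np

def slice_sample(logp, x0, n_samples=200, width=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage.
    logp is the log marginal likelihood as a function of one hyperparameter,
    e.g. log(alpha) or log(sigma^2)."""
    rng = rng or np.random.default_rng()
    x, samples = float(x0), []
    for _ in range(n_samples):
        level = logp(x) + np.log(rng.random())      # slice height under the curve
        lo = x - width * rng.random()
        hi = lo + width
        while logp(lo) > level:                     # step the interval out
            lo -= width
        while logp(hi) > level:
            hi += width
        while True:                                 # shrink until a point is accepted
            x_new = rng.uniform(lo, hi)
            if logp(x_new) > level:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        samples.append(x)
    return np.array(samples)
```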
These plots show the average rank of each method across all splits, where the difference between models is not statistically significant (p < 0.05) if the models are connected by a dark line, which is determined by the critical difference (CD). We make a few observations from these . First, the two-layer marginal likelihoodbased methods generally outperform the other methods, with the BN(BO)-2 NL model performing the best of all (although not significantly different from the BN(ML)-2 NL model according to the Friedman test). These are generally followed by the single-layer marginal-likelihood based methods, with the MAP-based methods performing the worst of all. This confirms our intuition that the more Bayesian versions of the models would yield better performance. From the train log likelihoods and RMSEs, we observe that all the models exhibit overfitting for most of the datasets, with especially noticeable overfitting on'boston','concrete','energy', and'yacht'. The overfitting is generally worse on the two-layer models than the single-layer models, as there are more hyperparamters θ that can lead to overfitting in these models. Based off this trend, we expect that as the number of layers is increased further that the overfitting would worsen, thereby potentially limiting the use of neural linear models to smaller, shallower neural networks. In Tables 6 and 7 we show the test log likelihoods for the UCI gap datasets. As we are more concerned about whether the models can capture in-between uncertainty than the performance of the models on these datasets, we do not compute average ranks for these datasets. Additionally, since the test set is not within the same distribution as the training set, we do not show on the training sets as they cannot be compared to the test performance. From these tables, we see that the MAP-L, MAP-L NL, and Reg-L NL models fail catastrophically on the'naval' and'energy' dataset. Additionally, MAP-1 performs especially poorly on'yacht', although it is not clear whether this can be termed'catastrophic'. While the BN NL models perform poorly on'naval' and'energy', by looking at the log likelihoods on the individual splits themselves we found that they were not actually failing catastrophically. This yet again confirms that the more fully Bayesian models are better, although it is surprising just how poorly the Reg-L NL models perform. Additionally, because the overfitting we observed before worsens as the number of layers is increased, the performance of the BN NL models worsens as more layers are added. We describe the setup for our experiments on the effect of hyperparameter tuning as well as provide additional not in the main text. We first describe the "reasonable" hyperparameter values that we selected: MAP For the MAP baseline, we select a batch size of 32. We set γ = 0.5, corresponding to a unity prior variance. We set the two learning rates to the ADAM default of 1e-3 . Finally, we allow for approximately 10000 gradient steps (we ensure that the last epoch is completed, so that there are at least 10000 gradient steps). MAP NL For the MAP neural linear model, we take the above optimal MAP network and obtain 200 slice samples of α W (the output weight prior variance), α b (the output bias prior variance), and σ 2 for Bayesian linear regression. We initialize α W = 1/50 and α b = 1, to match the scaling used in. Regularized NL We set γ W = γ b = 0.5, the learning rates to 1e-3 and the number of epochs to 5000. 
We initialize a 0 = b 0 = α W = α b = 1, the learning rate to 1e-3, and the number of epochs to 5000. We use the same hyperparameter settings as above, although in this case a 0, b 0, α W, and α b will remain fixed. The log likelihoods and RMSEs for each split are then compared to those obtained when hyperparameter tuning is allowed. We show the average test log likelihoods and test RMSEs for the models without hyperparameter tuning in Tables 9 and 10 for the UCI datasets and Tables 11 and 12 for the UCI gap datasets. We also visualize the for the gap datasets in Figure 7. These show that in general the hyperparameter tuning is of essential importance, particularly for the two-layer cases. As a whole, the are significantly worse than with hyperparameter tuning, and in particular, for each of the two-layer methods, there is at least one dataset where the are catastrophically bad compared to the models with hyperparameter tuning. Somewhat by contrast, however, while the for the gap datasets are worse for the methods that did not fail catastrophically, they are not catastrophically worse. For the methods that were not able to represent in-between uncertainty, however, we find that they now fail catastrophically on even more datasets. We now verify that the differences induced by the hyperparameter tuning are indeed statistically significant. In order to do so, we use the Wilcoxon signed-rank test as described in Demšar. By comparing each tuned model to its non-tuned counterpart over all splits, we arrive at the table shown in Table 8. This shows that the difference is indeed statistically significant (p < 0.05) on both the standard and gap datasets for the vast majority of models, measured both by log likelihoods and RMSEs. Model boston concrete energy kin8nm naval power protein wine yacht MFVI-1 -2.60 ± 0.06 -3.09 ± 0.03 -0.74 ± 0.02 1.11 ± 0.01 5.91 ± 0.04 -2.82 ± 0.01 -2.94 ± 0.00 -0.97 ± 0.01 -1.25 ± 0.13 MFVI-2 -2.82 ± 0.04 -3.10 ± 0.02 -0.77 ± 0.02 1.24 ± 0.01 5.99 ± 0.08 -2.81 ± 0.01 -2.87 ± 0.00 -0.98 ± 0.01 -1.15 ± 0.05 MCD-1 -2.71 ± 0.11 -3.33 ± 0.02 -1.89 ± 0.03 0.67 ± 0.01 3.31 ± 0.01 -2.98 ± 0.01 -3.01 ± 0.00 -0.97 ± 0.02 -2.48 ± 0.10 MCD-2 -2.70 ± 0.12 -3.17 ± 0.03 -1.34 ± 0.02 0.74 ± 0.01 3.91 ± 0.02 -2.92 ± 0.01 -2.95 ± 0.00 -1.16 ± 0.04 -2.88 ± 0.22 To ensure that this worse behavior is not because all models require hyperparameter tuning to perform reasonably well, we now compare these to for mean field variational inference (MFVI) and Monte Carlo dropout (MCD) without hyperparameter tuning. We implement MFVI according to using the local reparameterization trick . We set a unity prior variance and use a step size of 1e-3, using ADAM with approximately 25000 gradient steps and a batch size of 32. We allow the gradients to be estimated using 10 samples from the approximate posterior at each step. For testing we use 100 samples from the approximate posterior. For MCD, we follow the implementation in. We set the dropout rate to p = 0.05 with weight decay corresponding to unity prior variance. We again use a learning rate of 1e-3, using ADAM with approximately 25000 gradient steps using a batch size of 32. For testing we use 100 samples generated by the neural network. For the UCI datasets, we tabulate the test log likelihoods and test RMSEs for one-and two-layer architectures in Tables 13 and 14. 
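(For reference, the per-point test log likelihoods for MFVI and MCD are obtained from the 100 predictive samples by treating the predictive distribution as an equal-weight mixture of Gaussians, roughly as in the sketch below; the handling of the noise variance here is a simplification and an assumption on our part.)

```python
import numpy as np
from scipy.special import logsumexp

def mc_test_log_likelihood(pred_means, sigma2, y_true):
    """Average test log likelihood from S stochastic forward passes.

    pred_means : array of shape (S, N) of sampled predictive means.
    sigma2     : observation noise variance (scalar).
    y_true     : array of shape (N,) of test targets.
    """
    S = pred_means.shape[0]
    log_components = (-0.5 * np.log(2.0 * np.pi * sigma2)
                      - 0.5 * (y_true[None, :] - pred_means) ** 2 / sigma2)
    return float(np.mean(logsumexp(log_components, axis=0) - np.log(S)))
```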
These generally show reasonable values for each dataset despite the absence of hyperparameter tuning: there is no dataset for which either method can be said to do catastrophically badly. In fact, the for MFVI are largely competitive with the best we obtained for the neural linear models using hyperparameter tuning. This difference suggests that the reason hyperparameter tuning is important in the neural linear models is because it is necessary to carefully regularize the weights, whereas being approximately Bayesian over all of the weights is not as sensitive to the choice of hyperparameters. Although the train log likelihoods and RMSEs are not reported here, they confirm this intuition: the neural linear models still suffer from substantial overfitting, whereas the overfitting we observed for MFVI and MCD is far less. As with the for the UCI datasets with tuned hyperparameters, we compute average ranks for the models across all splits and use the Friedman test as described in Demšar to determine whether the differences are statistically significant. We plot the ranking using test log likelihoods in Figure 8 and using test RMSEs in Figure 9. The rankings show that MCD performs poorly on average in terms of test log likelihood, whereas MFVI performs reasonably well for both log likelihood and RMSE. However, these rankings: Average ranks of the single-run models on the UCI datasets according to test RMSEs, generated as described in Demšar. Model boston concrete energy kin8nm naval power protein wine yacht MFVI-1 3.78 ± 0.19 7.04 ± 0.33 4.30 ± 1.82 0.08 ± 0.00 0.03 ± 0.00 4.25 ± 0.12 5.02 ± 0.06 0.63 ± 0.01 1.30 ± 0.12 MFVI-2 3.70 ± 0.16 7.33 ± 0.25 2.58 ± 0.88 0.07 ± 0.00 0.03 ± 0.00 4.67 ± 0.23 5.05 ± 0.11 0.63 ± 0.01 1.26 ± 0.16 MCD-1 3.66 ± 0.12 8.00 ± 0.20 5.01 ± 1.72 0.12 ± 0.00 0.01 ± 0.00 4.87 ± 0.14 5.20 ± 0.05 0.65 ± 0.01 3.17 ± 0.55 MCD-2 3.58 ± 0.12 8.06 ± 0.24 5.18 ± 2.12 0.11 ± 0.00 0.02 ± 0.00 5.54 ± 0.64 5.18 ± 0.08 0.70 ± 0.01 3.75 ± 0.61 Table 16: Test RMSEs on the UCI Gap Datasets for MFVI and Monte Carlo Dropout do not take into account that the neural linear models fail catastrophically on some, but not all, datasets since they only take the ordering of the methods into account and not how well they perform. Therefore, we still argue that for the standard UCI datasets MFVI and MCD are better since they perform relatively well across all datasets without the need for hyperparameter tuning. Finally, we consider the performance of MFVI and MCD on the UCI gap datasets. We tabulate average test log likelihoods and test RMSEs in Tables 15 and 16. These echo the in Foong et al. (2019b), showing catastrophic failure of MFVI to express'in-between' uncertainty for the'energy' and'naval' datasets. They also show that MCD fails catastrophically on the'energy' datasets, as suggested by theoretical in Foong et al. (2019a); however, we believe these are the first in the literature that show catastrophic failure to express'in-between' uncertainty on a real dataset. The gap therefore show one crucial advantage of the neural linear models over MFVI and MCD: their ability to express'in-between' uncertainty. In this section, we briefly investigate the effect of slice sampling on the performance of the models. We first make plots of the predictive posterior distribution for each model trained on the toy problem of Section 3. These plots are visible in Figure 10. Note that the MAP-2 NL model simply becomes MAP inference. 
The most visible difference between Figure 10 and Figure 1 can be seen in the BN(BO)-2 NL model, which seems to have gained certainty at the edges while perhaps becoming slightly more uncertain in the gap; however, the effect in the gap is almost negligible. We observe the opposite effect in the MAP-2 NL model. Additionally, the Reg-2 NL model becomes slightly smoother. In general, however, it would seem that the effect of slice sampling for the toy problem is small. We then plot the differences in the log likelihoods and RMSEs between the full models (with slice sampling) and the equivalent models without the final slice sampling step, to observe any quantitative differences. These plots are shown in Figure 11 for the UCI datasets and Figure 12 for the UCI gap datasets. These plots do not give a clear picture of whether slice samping improves or worsens the performance of these models: it seems to depend on both the model and the dataset. To gain a clearer insight into whether slice sampling improves the performance of the neural linear models, we once again perform the Wilcoxon signed-rank test to compare the obtained with slice sampling to those without. The of this analysis is shown in Table 17. This shows that the majority of models are in fact improved by slice sampling, particularly when the improvement is measured in terms of the log likelihoods. However, in many cases, particularly when performance is measured in terms of RMSE, the effect of slice sampling is not statistically significant. Furthermore, it seems that performance for the Reg-L NL and BN(BO)-2 NL models may be worsened by slice sampling. In , in most cases performance will not be worsened by slice sampling. In particular, the MAP-L NL models seem to benefit especially from slice sampling. However, slice sampling is likely detrimental to the Reg-L NL models in all cases and potentially harmful for the BN(BO)-2 NL model when it comes to in-between uncertainty. | We benchmark the neural linear model on the UCI and UCI "gap" datasets. | 646 | scitldr |
The reproducibility of reinforcement-learning research has been highlighted as a key challenge area in the field. In this paper, we present a case study in reproducing the results of one groundbreaking algorithm, AlphaZero, a reinforcement learning system that learns how to play Go at a superhuman level given only the rules of the game. We describe Minigo, a reproduction of the AlphaZero system using publicly available Google Cloud Platform infrastructure and Google Cloud TPUs. The Minigo system includes both the central reinforcement learning loop as well as auxiliary monitoring and evaluation infrastructure. With ten days of training from scratch on 800 Cloud TPUs, Minigo can play evenly against LeelaZero and ELF OpenGo, two of the strongest publicly available Go AIs. We discuss the difficulties of scaling a reinforcement learning system and the monitoring systems required to understand the complex interplay of hyperparameter configurations. In March 2016, Google DeepMind's AlphaGo BID0 defeated world champion Lee Sedol by using two deep neural networks (a policy and a value network) and Monte Carlo Tree Search (MCTS) to synthesize the output of these two neural networks. The policy network was trained via supervised learning from human games, and the value network was trained from a much larger corpus of synthetic games generated by sampling game trajectories from the policy network. AlphaGo Zero BID1, published in October 2017, described a continuous pipeline, which, when initialized with random weights, could train itself to defeat the original AlphaGo system. The requirement for expert human data was replaced with a requirement for vast amounts of compute: approximately two thousand TPUs were used for 72 hours to train AlphaGo Zero to its full strength. AlphaZero BID2 presents a refinement of the AlphaGo Zero pipeline, notably removing the gating mechanism for publishing new models. In many ways, AlphaGo Zero can be seen as the logical culmination of fully automating and streamlining the bootstrapping process: the original AlphaGo system was bootstrapped from expert human data and reached a final strength that was somewhat stronger than the best humans. Then, by generating new training data with the stronger AlphaGo system and repeating the bootstrap process, an even stronger system was created. By automating the bootstrapping process until it is continuous, a system is created that can train itself to surpass human levels of play, even when starting from random play. In this paper, we discuss our experiences creating Minigo. About half of our effort went into rebuilding the infrastructure necessary to coordinate a thousand selfplay workers. The other half of the effort went into monitoring infrastructure to test and verify that what we had built was bug-free. Despite having at hand a paper describing the final architecture of AlphaZero, we rediscovered the hard way which components of the system were absolutely necessary to get right, and which components we could be messy with. It stands to reason that without the benefit of pre-existing work, monitoring systems are even more important in the discovery process. We discuss these monitoring systems in particular below. At the heart of Minigo is the reinforcement learning loop as described in the AlphaZero BID2 paper. (See the Appendix for a full comparison of AlphaGo Zero, AlphaZero, and Minigo).
Briefly, selfplay with the current generation of network weights is used to generate games, and those games are used as training data to produce the next generation of network weights. In each selfplay game, Minigo uses a variant of the UCT algorithm as described in the AlphaGo Zero paper to select a new variation from the game tree. The neural network considers the variation and predicts the next move, as well as the likelihood that one player will win the game. These predictions are integrated into the search tree, and the updated statistics are used to select the next variation to explore. A move is picked by either taking a weighted sample (first 30 moves) or picking the most visited variation. This is repeated until the game ends, one player resigns, or a move cap is reached. The final selected move thus takes into consideration the other player's responses and possible game continuations, and the visit counts can be used directly as a training target for the policy network. Additionally, the final game result can also be used as a training target for the value network. Each game's data (position, visit counts, game result) is then used to update the neural network's weights by stochastic gradient descent (SGD), simultaneously minimizing the policy and value error. The number of readouts (800 for Minigo and AlphaZero) invested into each move roughly determines the ratio of compute required for selfplay and training. Minigo uses 800 Cloud TPUs for selfplay and 1 Cloud TPU for training. To orchestrate the many Cloud TPUs required for selfplay, we used Google Kubernetes Engine (GKE) to deploy many copies of a selfplay binary written in C++. Each selfplay worker writes training data directly to Cloud BigTable (CBT), with one row being one (position, visit counts, game result) tuple. The trainer trains on samples from a sliding window of training data, and then publishes network weights to Google Cloud Storage (GCS). The selfplay workers periodically look for and download an updated set of network weights, closing the loop. As part of our monitoring, a calibration job periodically processes game statistics and updates the hyperparameters used in selfplay, to ensure that the quality of selfplay data remains high. Additionally, we use StackDriver to monitor the health of the selfplay cluster and compute bulk statistics over all of the games being played. For example, we would log and track statistics about the distribution of value head outputs, winrates by color, distribution of game lengths, statistics about MCTS search depth & breadth, distributions of our time spent per inference, and the resignation threshold rate. We used TensorBoard to monitor the training job, keeping track of statistics like policy loss, value loss, regularization loss, top-1/top-3 policy accuracy, the magnitude of the value output, magnitude of weight updates, and entropy of policy output, all measured over the training selfplay data, heldout selfplay data, and human professional data. An evaluation cluster continually plays different generations against each other to determine their relative strengths. Finally, we created a frontend (https://cloudygo.com) that would allow us to check various metrics at a glance, including data such as the relative ratings of each model, common opening patterns, responses to various challenging positions, game lengths, and the percentage of games that branched due to early game softpick. This frontend also served as a way to quickly spot-check selfplay games and evaluation games.
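To make the selection and move-picking logic above concrete, here is a minimal Python sketch. It is not Minigo's implementation: the node structure, the exploration constant C_PUCT and its value, and the helper names are illustrative assumptions; only the softpick cutoff of 30 moves follows the description above.

```python
import math
import random

C_PUCT = 1.5  # exploration constant -- assumed value, not Minigo's actual setting

class Node:
    def __init__(self, prior):
        self.prior = prior       # P(s, a) from the policy network
        self.visit_count = 0     # N(s, a)
        self.value_sum = 0.0     # W(s, a), accumulated value estimates
        self.children = {}       # move -> Node

    def q_value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node):
    """PUCT-style rule: balance the network prior against observed values."""
    total_visits = sum(c.visit_count for c in node.children.values())
    def score(child):
        exploration = C_PUCT * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.q_value() + exploration
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def pick_move(root, move_number, softpick_cutoff=30):
    """First `softpick_cutoff` moves: sample proportionally to visit counts;
    afterwards: play the most-visited variation."""
    moves, counts = zip(*[(m, c.visit_count) for m, c in root.children.items()])
    if move_number < softpick_cutoff:
        return random.choices(moves, weights=counts, k=1)[0]
    return moves[counts.index(max(counts))]
```

In the real system the statistics are maintained by the C++ engine and the network evaluations are batched, but the selection and move-picking rules have this general shape.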
We describe the evolution of the core components of Minigo, from project inception to their current state. Minigo's selfplay is by far the most expensive part of the whole pipeline (100-1000 times as much compute as training), so many of our efforts were focused on improving the throughput of our selfplay cluster. We initially prototyped Minigo with a python-based engine, playing Go on a 9x9 board (see footnote BID0). The size of this network (9 residual blocks of 32 filters) was easily evaluated without a GPU, and a small cluster of only a few hundred cores was able to keep up with a single P100 GPU running training. We used Google Kubernetes Engine to constantly run the containers, using its 'batch job' API to take care of scheduling, retrying, cleaning up dead pods, etc. Each Docker container would start up, read the latest model from GCS, use it to play one game, and then write out the training data to a different directory in GCS, randomly picking either a training or holdout directory. This iteration of Minigo was able to start with random play and eventually reached a point where it was playing at a strong amateur level. However, given the lack of a professional human 9x9 dataset, and our lack of expertise in evaluating proper 9x9 play, it was difficult for us to understand how much (or how little) we had accomplished. With our proof of concept in hand, we moved to a 19x19 version of the run, scaling up the depth and width of our network. It was immediately clear that we could not rely on a CPU-only implementation: although we initially moved to an intermediate-sized network for our 19x19 run, the full network would be 512 times slower to evaluate (see footnote BID1) than the network we had used for our 9x9 trial run. We started with 2000 NVIDIA K80 preemptible GPUs, making use of Google Kubernetes Engine to dynamically scale load. GKE additionally had an easy solution for managing the installation of NVIDIA drivers into many containers, so this saved us a great deal of trouble. We also implemented a vectorized MCTS that would minimize the Python overhead involved in tree search. Footnote BID0: The computational complexity scales as a large polynomial, at least O(N^6): N^2 for the board area covered by convolution operations; N^2 for the approximate length of a game; N for the depth of network required; and N to N^2 for the number of reads in the game tree. Footnote BID1: Moving from a depth of 9 to 20 residual layers is 2x; moving from a width of 32 to 256 is 64x; and convolving over a 19x19 board instead of a 9x9 board is 4x, for a grand total of 512x. It is useful to briefly describe several batching techniques that are available to us. As batching was the easiest way to scale up the throughput of our selfplay workers, it was the main driver behind various design decisions. The easiest batching technique is to run multiple games in parallel. However, this comes with the drawback of increasing the latency of game completion: completing 16 games in 10 minutes is not necessarily better than completing 1 game in 1 minute. An alternative batching technique is virtual losses in MCTS BID7. Virtual losses is a technique used to select multiple variations from a single MCTS game tree. To reiterate, MCTS is a tree algorithm that determines the best leaf to explore using a combination of probability priors (in this case from the 'policy' network) and the outcome of simulations from that node (in this case, from the 'value' network).
Once the leaf is chosen, expanding it with an inference is slow, leaving an opportunity to do additional work. Unfortunately, MCTS deterministically identifies the'best' leaf to expand, and has no notion of'second best' leaf. To choose additional leaves, we can mark the originally chosen leaf as having been a loss, and then rerun selection to get another leaf. When the of inference come back, the loss is replaced with the true evaluation. The number of simultaneous pending virtual losses is a tuneable parameter -increasing this number improves throughput but degrades selfplay quality. Using our K80 cluster, we ran the full Minigo pipeline 3 times as we worked out various bugs and added new infrastructure. However, we wished to see if the newly released Cloud TPUs (now in alpha on GKE) would yield improved performance. At the time, our Python engine had about 5% overhead on K80s, but we estimated that with TPUs, the python engine overhead would expand to 25% overhead or more. At this point, we rewrote our selfplay engine in C++, with an eye towards integrating MCTS very closely with Cloud TPU inference. With a modest number of TPUs, we could achieve a full run in about a week, assuming we could drive them near their theoretical limit. For maximal throughput, Cloud TPUs demanded even larger batch sizes than what we'd been able to produce with virtual losses. To generate this throughput, we played multiple games simultaneously; each selfplay worker utilized one Cloud TPU to play 32 games in parallel at virtual losses = 8. Each game took 15 minutes on average, leading to about 2 completed games per minute per Cloud TPU.With simultaneous games being played, we had to deal with the ragged edge problem -since not all games are the same length, when a game ended, we could either immediately start a new game, or suffer reduced throughput on the TPU. And if we continually started new games, what would we do when a new model was published? Our solution was to just switch models whenever a new one was published even if it happened in the middle of the game. We attempted to keep the pipeline balanced such that this happened only once a game, on average. Empirically, this appears not to have hurt the pipeline's performance. One operational risk of this approach is that each resign threshold is calibrated to a particular model. If a new model significantly skewed value output compared to its predecessor, then many games near the resign threshold can all simultaneously resign. This would trigger a thundering herd problem as many games complete simultaneously and would possibly overload Cloud BigTable with heavy write volume. Worse still, since the resign threshold must be calibrated from resignation-disabled games (which are played to completion and therefore take 2-3x time to complete), entire games could be played with the wrong resignation threshold. To solve these issues, we set aside a small number of TPUs specifically for playing calibration matches with a much lower parallel game configuration, to minimize game completion latency. A ringbuffer was used to store the most recent calibration and compute the threshold, and the buffer size kept small to ensure rapid updates. With these changes, and with the help of Kubernetes, we were able to spin up over 100 petaops of compute, and flexibly and efficiently shift them around the load patterns in Google's cloud. The cluster of workers was able to achieve about 1.3-1.5ms per inference, playing about 1.8M games per day with the'full size' 20 block network. 
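A rough Python sketch of the virtual-loss bookkeeping described above follows. It assumes tree nodes that expose visit_count and value_sum fields and a select_leaf function returning the root-to-leaf path; these names are illustrative assumptions, not Minigo's code.

```python
VIRTUAL_LOSS = 1  # each pending evaluation is temporarily counted as one loss

def add_virtual_loss(path):
    # Pretend the selected leaf lost, so the next selection avoids the same path.
    for node in path:
        node.visit_count += VIRTUAL_LOSS
        node.value_sum -= VIRTUAL_LOSS

def revert_virtual_loss(path, value):
    # When inference returns, replace the temporary loss with the true value;
    # the visit added above is kept and now counts as the real visit.
    for node in path:
        node.value_sum += VIRTUAL_LOSS + value

def select_leaves_for_batch(root, select_leaf, batch_size=8):
    """Collect `batch_size` distinct leaves from one tree by applying
    virtual losses between selections (parallelism within a single game)."""
    paths = []
    for _ in range(batch_size):
        path = select_leaf(root)     # list of nodes from the root to the chosen leaf
        add_virtual_loss(path)
        paths.append(path)
    return paths
```

Increasing the batch size here mirrors the tuneable trade-off mentioned above: more pending virtual losses improve throughput but degrade selfplay quality.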
The training part of the Minigo system has much in common with many supervised learning tasks. The only difference is that instead of training on a predefined dataset, we train on a continually evolving dataset. As TensorFlow is tailored towards a fixed input pipeline configuration, we found it easiest to restart our TensorFlow training process every time we wanted to shift our sliding window of training data. Other reinforcement learning frameworks like TF Agents BID8 address this problem by maintaining an in-memory circular buffer of training data, but due to our shuffling requirements (discussed below), this was not feasible for us. From a systems perspective, we also designed our trainer system to pause if our selfplay cluster slowed down or stopped for any reason -otherwise, the trainer would overfit on the same data, necessitating a rollback intervention. Other than the frequent restarts, our trainer followed best practices for supervised learning. For example, we'd originally used a custom data storage format designed for optimized compression of Go game data. While this worked well for our single-machine prototypes, we rewrote our training code into separate input pipeline and model code with TensorFlow Datasets and Estimator to best take advantage of Google Cloud TPUs. By using TensorFlow Datasets instead of Python iteration, we could make use of TensorFlow's parallel, buffered I/O features to directly feed data to the Cloud TPUs, first by reading directly from GCS, then by reading directly from Cloud BigTable. Shuffling turned out to be an unexpectedly tricky and important part of our pipeline. A major complicating factor for shuffling was that AlphaGo Zero specified a large trailing window of 5e5 games (roughly 1e8 positions or 2e12 bytes, assuming a featurization of 19x19x17 array of float32) from which positions should be uniformly sampled. Ideally, sampling should be uniform; every SGD minibatch should be uniformly drawn from the entire dataset. Such a large dataset can be sharded to different degrees; if there are many smaller files, then uniform sampling requires the overhead of opening many different small files for each minibatch, and if there are a few large files, there is an overhead associated with seeking to the right position. (While each of our positions were fixed-length, TensorFlow's TFExample format is variable-length, so TensorFlow's APIs did not allow jumping to a specified offset.)In practice, an approximation to uniform shuffling is required. At different points in Minigo's history, we used different approximations, each of which had different drawbacks. We found that every time we improved our shuffling, Minigo's strength would improve dramatically. The AlphaZero paper also reports that without using the 8-fold symmetry of the Go board to augment the training data, about 10x as many games needed to be played to reach the same strength. We believe that Minigo's sensitivity to proper shuffling arises from Go's gameplay pattern of placing stones sequentially. At a high level of play, stones are placed and rarely removed from the board, and thus specific subpatterns will persist through an entire game. Since every position will share the same training target of +/-1 for the value network, it is easy for the neural network to overfit by simply memorizing each position. We first noticed our lack of adequate shuffling when Minigo became extremely overconfident about games it played, as measured by the magnitude of its value network output in games against humans. 
We had also noticed that Minigo's performance on predicting outcomes of human professional games had dropped, but it was difficult to understand whether this was due to human professionals not playing perfectly or because Minigo was overfitting its value head. We obtained a conclusive answer when we started setting aside 5% of our selfplay games for validating our policy and value accuracy. Our network showed excellent performance on selfplay games which it had trained on, but near-random performance for selfplay games it had not trained on. All in all, we learned that to shuffle effectively, we needed to adequately scramble each source of correlation in our data. We had two primary sources of correlation: intra-game correlation (every position from the same game shared the same ), and generational correlation (every game played by a given set of network weights would be of a similar 'style').In our early shuffler implementations, we started from hundreds of thousands of tiny files, containing 100 to 1000 positions each. These files represented the raw output of thousands of selfplay workers as soon as they completed playing a set of games. Our first implementation, consisted of reading sequentially through the last million games, sampling 2% of positions, and emitting a series of chunks of 2048 positions each. We would then train on these chunks in shuffled order. This method failed on both shuffling criteria: each chunk contained on average, more than 1 position from the same game, and each chunk consisted of games from the same generation. To fix both issues, we utilized a machine with 64GB memory to perform a perfect shuffle on positions uniformly sampled from the last 5e5 games. Even then, we observed a large boost in strength when we randomly additionally applied one of 8 symmetries of the Go board to each selected position. This shuffler implementation served us up until we replaced our cluster of K80 GPUs with Cloud TPUs. When this happened, our single-threaded shuffler reading TFExample files from GCS became the new bottleneck. We patched our implementation with a Python multiprocessing pool, which bought us enough time to implement a Cloud BigTable solution. With CBT, our selfplay cluster would directly write one row into CBT for each position. In our Cloud BigTable pipeline, each TFExample was written to its own sequential row, with an individual row index determined by its game number and move number within the game, so that each individual move in the entire list of games could be randomly accessed. The sequential ordering was necessary because CBT's scan range operations are lexicographic, and we needed to be able to sample moves from a predictable game range (the last N games played by the cluster).Cloud Bigtable provides probabilistic sampling as part of its server-side API. The lower the probability, the better the savings in bandwidth, since only the selected rows are transferred. Computing the correct sampling probability required some bookkeeping, since the number of moves in a game can vary, and there is no method to count the rows in a CBT lexicographic row range aside from iterating through them. Therefore, the move count per game was stored in a separate keyspace in the table, so that for any given game range, the total moves in that range could be calculated, and from that total, the correct sampling probability (typically around 1.5%).We then executed a final in-memory shuffle of the sampled row keys before requesting the row keys from CBT. 
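The bookkeeping behind the sampling probability and the final in-memory shuffle can be sketched in plain Python. This is an illustrative stand-in only: the real pipeline performs the probabilistic filtering server-side in Cloud BigTable, so the function names and key layout below are assumptions rather than the actual CBT API.

```python
import random

def sampling_probability(moves_per_game, last_n_games, target_positions):
    """Move counts per game are kept in a separate keyspace; use them to find the
    total number of rows in the trailing window and the row-sampling probability
    needed to draw roughly `target_positions` positions."""
    recent_games = sorted(moves_per_game)[-last_n_games:]
    total_moves = sum(moves_per_game[g] for g in recent_games)
    return min(1.0, target_positions / total_moves), recent_games

def sample_row_keys(moves_per_game, last_n_games, target_positions, seed=0):
    """Client-side stand-in for the server-side probabilistic row filter,
    followed by the final in-memory shuffle of the sampled row keys."""
    rng = random.Random(seed)
    prob, recent_games = sampling_probability(moves_per_game, last_n_games, target_positions)
    keys = [(game, move)
            for game in recent_games
            for move in range((moves_per_game[game]))
            if rng.random() < prob]
    rng.shuffle(keys)  # shuffle key order before the reads are issued
    return keys

# Toy example: five games with their move counts (real runs use a ~500k-game window).
moves = {0: 180, 1: 212, 2: 95, 3: 240, 4: 160}
print(sample_row_keys(moves, last_n_games=4, target_positions=50)[:5])
```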
Training ran in parallel with consumption of shuffled data, decreasing the overall training time by 50%, and removing the input pipeline as the bottleneck. Another unexpectedly tricky part of the Minigo pipeline was our calibration process. The AlphaGo Zero paper described using a resignation threshold to terminate selfplay when both the current position's evaluation was heavily in favor of one side. This early termination is valuable in saving compute, as Go has an onerous game-end condition requiring playing the game out for many hundreds of moves. BID2 To determine this resignation threshold, the AlphaZero paper specified that 10% of games should be played to completion, and a threshold chosen such that fewer than 5% of games would have been incorrectly resigned. We discovered that this early termination had several second-order effects on the Minigo system. The most severe consequence was that an incorrectly calibrated resignation threshold could destabilize training by creating a pessimistic loop when a game was prematurely resigned. When that resignation was subsequently used as training data, it would be even more likely that the network would resign if presented with the same position. This only occurred occasionally, usually due to a different bug, but such a pessimistic loop would cause training to diverge. In order to detect this condition, we tracked statistics on winrate by color, distribution of game lengths, and the average magnitude of the value network's output during selfplay. A pessimistic loop would typically in a heavy skew of winrate, drastically shorter games, and a very confident value network output but a very high value error on holdout games. To avoid a pessimistic loop scenario, we started to calibrate our resignation threshold to a more conservative 3% false positive rate, even if it meant playing longer games. Additionally, we invested in lowering the latency involved in computing the resignation threshold. Originally, our resignation threshold was computed by running a custom script over the most recent resignation-disabled games. This script was executed by hand, and the resignation threshold configuration propagated by pushing a new flagfile to GCS. Being human dependent meant that the script was not run quite as often as it should have been, leading to our threshold being fairly conservative and having the false positive rate under 5%. Eventually, with our Cloud BigTable rewrite, we could compute the resignation threshold by directly querying CBT. We updated our configuration mechanism to be more lightweight and automated the calculation, and the lowered latency improved the robustness of the early stages of our pipeline, where the network was rapidly learning the basics of the game. Another consequence of early termination was that the distribution of training data shifted towards early-and mid-game examples. Amusingly, this meant that our network would make basic mistakes in end-game positions when we ran test matches against humans, because there were almost no end game positions in the training data. We discovered this by posting our bot on an online Go server, where its human opponents had no preconceived notions of what the bot was "supposed" to do. We also set up a BigQuery dataset of moves played, and crafted a SQL query that would detect instances of games where the estimated winrate was 99%+ in favor of one side before suddenly flipping to 99% in favor of the other side. 
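Before turning to that fix, a minimal sketch of one way to compute the resignation threshold described earlier in this section is given below. The per-game statistic used here (the lowest value estimate seen by the eventual winner of a resignation-disabled game) is our assumption about what gets tracked, not a description of Minigo's exact bookkeeping.

```python
import numpy as np

def calibrate_resign_threshold(min_winner_values, target_false_positive_rate=0.03):
    """`min_winner_values` holds, for each resignation-disabled game, the lowest
    value estimate the eventual winner saw. Resigning whenever the value drops
    below the returned threshold would have wrongly ended at most
    `target_false_positive_rate` of these games."""
    return float(np.quantile(np.asarray(min_winner_values), target_false_positive_rate))

# Hypothetical per-game minima of the winner's value estimate.
mins = [-0.98, -0.42, -0.10, -0.71, -0.95, -0.30, -0.88, -0.15, -0.60, -0.25]
print(calibrate_resign_threshold(mins, target_false_positive_rate=0.05))
```

With a ring buffer of recent resignation-disabled games, as described above, keeping the threshold fresh simply means recomputing this quantile whenever the buffer is updated.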
To fix our end-game deficiency, we also trained on the resignation-disabled games so that there would be end-game positions to learn from. We allocated a small cluster of GPUs to play evaluation games between different models. Evaluation games disabled all of the settings that encourage exploration, like Dirichlet noise and temperature. By using a modified Bradley-Terry system as implemented by the Python choix package, we computed a single parameter rating which we convert to the more familiar Elo number for display purposes. Measuring the relative strength of our models was useful for several reasons:
- to figure out when a run has stopped improving,
- to find the strongest models in a run (without gating, strength is not guaranteed to improve between each model, and in practice it was observed that successive models could have rating swings of up to 900 Elo),
- to compare our rating-over-time graphs to those shown in the AG / AGZ / AZ papers.
Our evaluation methodology differed in one important respect from the methodology described by AlphaGo Zero. There, evaluation was used as a method of gating the promotion of new models, and each model would only play against the currently active model. This methodology potentially suffers from transitive cycles where A beats B, B beats C, and C beats A. Our evaluation methodology used a more diverse array of opponents, playing model N versus models N-1, N-2, N-3, N-5, N-10, N-20, and N-50. Thus, as the run proceeded, each model would play a symmetric set of pairs of models that came both before and after it. Additional games were chosen to reduce the uncertainty in each model's rating for models that had not played similar strength models. As we experimented and made improvements, it became important to assess models from different runs. Cross-run evaluation games were played between the strongest models of each run to determine the impact of our hyperparameter changes and pipeline bug fixes on overall strength. We obtained the most reliable cross-run ratings by taking the best models at regular intervals from each run, and then playing an equal number of games for all n-choose-2 pairings. Beyond its use in evaluating training runs, the evaluation cluster was used for a number of one-off tests, such as tuning various hyperparameters (pUCT, virtual loss size, Q initialization), testing against other networks trained via completely different approaches (e.g. supervised, transfer learning, or alternate AlphaGo reproductions), and playing matches against external reference points. In reproducing AlphaGo Zero, we found it useful to monitor nearly everything we could think of, to prove to ourselves that our implementation was bug-free. We found Figure 3 from the original AlphaGo Zero paper to be a very useful guidepost, and wish that we had more metrics to compare against - a richer array of training metrics (e.g. policy entropy, value confidence, policy confidence) would not be overly burdensome to provide, and would have greatly aided our reproduction efforts. And even if the raw data were not provided, it would have been valuable to know which metrics we should monitor. We also found that healthy metrics were necessary but not sufficient for success. Our evaluation cluster was the only true indicator of success, and was of tremendous help in letting us know that a change in our hyperparameter settings had had a positive effect. | We reproduced AlphaZero on Google Cloud Platform | 647 | scitldr
Generative adversarial networks (GANs) train implicit generative models through solving minimax problems. Such minimax problems are known as nonconvex- nonconcave, for which the dynamics of first-order methods are not well understood. In this paper, we consider GANs in the type of the integral probability metrics (IPMs) with the generator represented by an overparametrized neural network. When the discriminator is solved to approximate optimality in each iteration, we prove that stochastic gradient descent on a regularized IPM objective converges globally to a stationary point with a sublinear rate. Moreover, we prove that when the width of the generator network is sufficiently large and the discriminator function class has enough discriminative ability, the obtained stationary point corresponds to a generator that yields a distribution that is close to the distribution of the observed data in terms of the total variation. To the best of our knowledge, we seem to first establish both the global convergence and global optimality of training GANs when the generator is parametrized by a neural network. The file iclr2020_conference.pdf contains these instructions and illustrates the various formatting requirements your ICLR paper must satisfy. Submissions must be made using L A T E X and the style files iclr2020_conference.sty and iclr2020_conference.bst (to be used with L A T E X2e). The file iclr2020_conference.tex may be used as a "shell" for writing your paper. All you have to do is replace the author, title, abstract, and text of the paper with your own. The formatting instructions contained in these style files are summarized in sections 2, 3, and 4 below. The text must be confined within a rectangle 5.5 inches (33 picas) wide and 9 inches (54 picas) long. The left margin is 1.5 inch (9 picas). Use 10 point type with a vertical spacing of 11 points. Times New Roman is the preferred typeface throughout. Paragraphs are separated by 1/2 line space, with no indentation. Paper title is 17 point, in small caps and left-aligned. All pages should start at 1 inch (6 picas) from the top of the page. Authors' names are set in boldface, and each name is placed above its corresponding address. The lead author's name is to be listed first, and the co-authors' names are set to follow. Authors sharing the same address can be on the same line. Please pay special attention to the instructions in section 4 regarding figures, tables, acknowledgments, and references. The recommended paper length is 8 pages, with unlimited additional pages for citations. There will be a strict upper limit of 10 pages for the main text. Reviewers will be instructed to apply a higher standard to papers in excess of 8 pages. Authors may use as many pages of appendices (after the bibliography) as they wish, but reviewers are not required to read these. These instructions apply to everyone, regardless of the formatter being used. Citations within the text should be based on the natbib package and include the authors' last names and year (with the "et al." construct for more than two authors). When the authors or the publication are included in the sentence, the citation should not be in parenthesis using \citet{} (as in " for more information."). Otherwise, the citation should be in parenthesis using \citep{} (as in "Deep learning shows promise to make progress towards AI ."). The corresponding references are to be listed in alphabetical order of authors, in the REFERENCES section. 
As to the format of the references themselves, any style is acceptable as long as it is used consistently. Indicate footnotes with a number 1 in the text. Place the footnotes at the bottom of the page on which they appear. Precede the footnote with a horizontal rule of 2 inches (12 picas). Make sure the figure caption does not get separated from the figure. Leave sufficient space to avoid splitting the figure and figure caption. You may use color figures. However, it is best for the figure captions and the paper body to make sense if the paper is printed either in black/white or in color. The Hessian matrix of f at input point x f (x)dx Definite integral over the entire domain of x S f (x)dx Definite integral with respect to x over the set S Probability and Information Theory P (a) A probability distribution over a discrete variable p(a) A probability distribution over a continuous variable, or over a variable whose type has not been specified H (Positive part of x, i.e., max(0, x) 1 condition is 1 if the condition is true, 0 otherwise Do not change any aspects of the formatting parameters in the style files. In particular, do not modify the width or length of the rectangle the text should fit into, and do not change font sizes (except perhaps in the REFERENCES section; see below). Please note that pages should be numbered. Please prepare PostScript or PDF files with paper size "US Letter", and not, for example, "A4". The -t letter option on dvips will produce US Letter files. Consider directly generating PDF files using pdflatex (especially if you are a MiKTeX user). PDF figures must be substituted for EPS figures, however. Otherwise, please generate your PostScript and PDF files with the following commands: dvips mypaper.dvi -t letter -Ppdf -G0 -o mypaper.ps ps2pdf mypaper.ps mypaper.pdf Most of the margin problems come from figures positioned by hand using \special or other commands. We suggest using the command \includegraphics from the graphicx package. Always specify the figure width as a multiple of the line width as in the example below using.eps graphics A number of width problems arise when LaTeX cannot properly hyphenate a line. Please give LaTeX hyphenation hints using the \-command. If you'd like to, you may include a section for author contributions as is done in many journals. This is optional and at the discretion of the authors. A APPENDIX You may include other additional sections here. | We establish global convergence to optimality for IPM-based GANs where the generator is an overparametrized neural network. | 648 | scitldr |
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets. Figure 1: Phenomena affecting and inspiring the design of the multi-scale attributed network embedding procedure. In Figure 1a attributed nodes D and G have the same feature set and their nearest neighbours also exhibit equivalent sets of features, whereas features at higher order neighbourhoods differ. Figure 1b shows that as the order of neighbourhoods considered (r) increases, the product of the adjacency matrix power and the feature matrix becomes less sparse. This suggests that an implicit decomposition method would be computationally beneficial. Our key contributions are: 1. to introduce the first Skip-gram style embedding algorithms that consider attribute distributions over local neighborhoods, both pooled (AE) and multi-scale (MUSAE), and their counterparts that attribute distinct features to each node (AE-EGO and MUSAE-EGO); 2. to theoretically prove that their embeddings approximately factorize PMI matrices based on the product of an adjacency matrix power and node-feature matrix; 3. to show that popular network embedding methods DeepWalk and Walklets are special cases of our AE and MUSAE; 4. we show empirically that AE and MUSAE embeddings enable strong performance at regression, classification, and link prediction tasks for real-world networks (e.g. Wikipedia and Facebook), are computationally scalable and enable transfer learning between networks. We provide reference implementations of AE and MUSAE, together with the datasets used for evaluation at https://github.com/iclr2020/MUSAE. Efficient unsupervised learning of node embeddings for large networks has seen unprecedented development in recent years. The current paradigm focuses on learning latent space representations of nodes such that those that share neighbors (; ; ;), structural roles or attributes are located close together in the embedding space. Our work falls under the last of these categories as our goal is to learn similar latent representations for nodes with similar sets of features in their neighborhoods, both on a pooled and multi-scale basis. Neighborhood preserving node embedding procedures place nodes with common first, second and higher order neighbors within close proximity in the embedding space. Recent works in the neighborhood preserving node embedding literature were inspired by the Skip-gram model (a; b), which generates word embeddings by implicitly factorizing a shifted pointwise mutual information (PMI) matrix obtained from a text corpus. This procedure inspired DeepWalk , a method which generates truncated random walks over a graph to obtain a "corpus" from which the Skip-gram model generates neighborhood preserving node embeddings. 
In doing so, DeepWalk implicitly factorizes a PMI matrix, which can be shown, based on the underlying first-order Markov process, to correspond to the mean of a set of normalized adjacency matrix powers up to a given order . Such pooling of matrices can be suboptimal since neighbors over increasing path lengths (or scales) are treated equally or according to fixed weightings (a;); whereas it has been found that an optimal weighting may be task or dataset specific . In contrast, multi-scale node embedding methods such as LINE , GraRep and Walklets separately learn lower-dimensional node embedding components from each adjacency matrix power and concatenate them to form the full node representation. Such un-pooled representations, comprising distinct but less information at each scale, are found to give higher performance in a number of downstream settings, without increasing the overall number of free parameters . Attributed node embedding procedures refine ideas from neighborhood based node embeddings to also incorporate node attributes (equivalently, features or labels) (; ; ; ;). Similarities between both a node's neighborhood structure and features contribute to determining pairwise proximity in the node embedding space. These models follow quite different strategies to obtain such representations. The most elemental procedure, TADW , decomposes a convex combination of normalized adjacency matrix powers into a matrix product that includes the feature matrix. Several other models, such as SINE and ASNE , implicitly factorize a matrix formed by concatenating the feature and adjacency matrices. Other approaches such as TENE , formulate the attributed node embedding task as a joint non-negative matrix factorization problem in which node representations obtained from sub-tasks are used to regularize one another. AANE uses a similar network structure based regularization approach, in which a node feature similarity matrix is decomposed using the alternating direction method of multipliers. The method most similar to our own is BANE , in which the product of a normalized adjacency matrix power and a feature matrix is explicitly factorized to obtain attributed node embeddings. Many other methods exist, but do not consider the attributes of higher order neighborhoods (; ; ; ;). The relationship between our pooled (AE) and multi-scale (MUSAE) attributed node embedding methods mirrors that between graph convolutional neural networks (GCNNs) and multi-scale GCNNs. Widely used graph convolutional layers, such as GCN , GraphSage , GAT (Veličković et al., 2018), APPNP , SGCONV and ClusterGCN , create latent node representations that pool node attributes from arbitrary order neighborhoods, which are then inseparable and unrecoverable. In contrast, MixHop learns latent features for each proximity. We now define algorithms to learn node embeddings using the attributes of nearby nodes, that allows both node and attribute embeddings to be learned jointly. The aim is to learn similar embeddings for nodes that occur in neighbourhoods of similar attributes; and similar embeddings for attributes that often occur in similar neighbourhoods of nodes. Let G = (V, L) be an undirected graph of interest where V and L are the sets of vertices and edges (or links) respectively; and let F be the set of all possible node features (i.e. attributes). We define F v ⊆ F as the subset of features belonging to each node v ∈ V. 
An embedding of nodes is a mapping g: V → R^d that assigns a d-dimensional representation g(v) (or simply g_v) to each node v and is fully described by a matrix G ∈ R^{|V|×d}. Similarly, an embedding of the features (to the same latent space) is a mapping h: F → R^d with embeddings denoted h(f) (or simply h_f), and is fully described by a matrix H ∈ R^{|F|×d}. The Attributed Embedding (AE) procedure is described by Algorithm 1. We sample n starting nodes w_1 from which to start attributed random walks on G, with probability proportional to their degree (Line 2). From each starting node, a node sequence of length l is sampled over G (Line 3), where sampling follows a first-order random walk. For a given window size t, we iterate over each of the first l − t nodes of the sequence, termed source nodes w_j (Line 4). For each source node, we consider the following t nodes as target nodes (Line 5). For each target node w_{j+r}, we add the tuple (w_j, f) to the corpus D for each target feature f ∈ F_{w_{j+r}} (Lines 6 and 7). We also consider features of the source node f ∈ F_{w_j}, adding each (w_{j+r}, f) tuple to D (Lines 9 and 10). Running Skip-gram on D with b negative samples (Line 15) generates the d-dimensional node and feature embeddings.
Algorithm 2: MUSAE sampling and training procedure
3.2 MULTI-SCALE ATTRIBUTED EMBEDDING
The AE method (Algorithm 1) pools feature sets of neighborhoods at different proximities. Inspired by the performance of (unattributed) multi-scale node embeddings, we adapt the AE algorithm to give multi-scale attributed node embeddings (MUSAE). The embedding component of a node v ∈ V for a specific proximity r ∈ {1, ..., t} is given by a mapping g_r: V → R^{d/t} (assuming t divides d). Similarly, the embedding component of feature f ∈ F at proximity r is given by a mapping h_r: F → R^{d/t}. Concatenating gives a d-dimensional embedding for each node and feature. The Multi-Scale Attributed Embedding procedure is described by Algorithm 2. We again sample n starting nodes w_1 with a probability proportional to node degree (Line 2) and, for each, sample a node sequence of length l over G (Line 3) according to either a first- or second-order random walk. For a given window size t, we iterate over the first l − t (source) nodes w_j of the sequence (Line 4) and for each source node we iterate through the t (target) nodes w_{j+r} that follow (Line 5). We again consider each target node feature f ∈ F_{w_{j+r}}, but now add tuples (w_j, f) to a sub-corpus D_r^→ (Lines 6 and 7). We add tuples (w_{j+r}, f) to another sub-corpus D_r^←.
It has been shown that the loss function of Skip-gram with negative sampling (SGNS) is minimized if the embedding matrices factorize a matrix of pointwise mutual information (PMI) of word co-occurrence statistics. Specifically, for a word dictionary V with |V| = n, SGNS (with b negative samples) outputs two embedding matrices W, C ∈ R^{d×n} such that, ∀w, c ∈ V, w_w^⊤ c_c = log(#(w, c) |D| / (#(w) #(c))) − log b, where #(w, c), #(w), #(c) denote counts of the word-context pair (w, c), of w and of c over a corpus D; and the word embeddings w_w, c_c ∈ R^d are the columns of W and C corresponding to w and c respectively. Taking #(w)/|D|, #(c)/|D| and #(w, c)/|D| as empirical estimates of p(w), p(c) and p(w, c) respectively shows that w_w^⊤ c_c ≈ PMI(w, c) − log b, i.e. an approximate low-rank factorization of a shifted PMI matrix (low rank since typically d ≪ n). Later work extended this to node embedding models that apply SGNS to a "corpus" generated from random walks over the graph. In the case of DeepWalk, where random walks are first-order Markov, the joint probability distributions over nodes at different stages of a random walk can be expressed in closed form.
A closed form then follows for the factorized PMI matrix. We show that AE and MUSAE implicitly perform analogous matrix factorizations. Notation: A ∈ R^{n×n} denotes the adjacency matrix and D ∈ R^{n×n} the diagonal degree matrix of a graph G, i.e. D_{w,w} = deg(w) = Σ_v A_{w,v}. We denote the volume of G by c = Σ_{v,w} A_{v,w}. We define the binary attribute matrix F ∈ {0, 1}^{|V|×|F|} by F_{w,f} = 1_{f ∈ F_w}, ∀w ∈ V, f ∈ F. For ease of notation, we let P = D^{−1}A and E = diag(1^⊤ D F), where diag indicates a diagonal matrix. Assuming G is ergodic, p(w) = deg(w)/c, w ∈ V, is the stationary distribution over nodes, i.e. c^{−1}D = diag(p(w)); and c^{−1}A is the stationary joint distribution over consecutive nodes p(w_j, w_{j+1}). F_{w,f} can be considered a Bernoulli parameter describing the probability p(f | w) of observing a feature f at a node w, and so c^{−1}DF describes the stationary joint distribution p(f, w_j) over nodes and features. Accordingly, P is the matrix of conditional distributions p(w_{j+1} | w_j); and E is a diagonal matrix proportional to the probability of observing each feature at the stationary distribution p(f) (note that p(f) need not sum to 1, whereas p(w) necessarily must). We know that the SGNS aspect of MUSAE (Algorithm 2, Line 17) is minimized when the learned embeddings g_r and h_r factorize the shifted PMI matrix of the node-feature co-occurrence statistics in the corresponding sub-corpus. Our aim is to express this factorization in terms of known properties of the graph G and its features. Lemma 1. The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps (i) after or (ii) before node v ∈ V. Proof. See Appendix. Lemma 2. Empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps either side of node v ∈ V. Marginalizing gives unbiased estimates of the stationary probability distributions of nodes and features. Theorem 1. MUSAE embeddings approximately factorize the node-feature PMI matrix. Lemma 3. The empirical statistics of node-feature pairs learned by the AE algorithm give unbiased estimates of mean joint probabilities over different path lengths: the pooled corpus splits into sub-corpora D_s for s ∈ {1, ..., t}, with |D_s| = t^{−1}|D|; combining with Lemma 2, the result follows. Theorem 2. AE embeddings approximately factorize the pooled node-feature matrix. Proof. The proof is analogous to the proof of Theorem 1. Remark 1. DeepWalk is a corner case of AE with F = I_{|V|}. That is, DeepWalk is equivalent to AE if each node has a single unique feature. Thus E = diag(1^⊤ D I) = D and, by Theorem 2, DeepWalk's embeddings factorize the corresponding matrix. Remark 2. Walklets is a corner case of MUSAE with F = I_{|V|}. Thus, for r = 1, ..., t, the embeddings of Walklets factorise the corresponding matrices at each scale. Remark 3. Appending an identity matrix I to the feature matrices F of AE and MUSAE (denoted [F; I]) adds a unique feature to each node. The resulting algorithms, named AE-EGO and MUSAE-EGO, learn embeddings that, respectively, approximately factorize the corresponding node-feature PMI matrices.
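The corpora whose statistics these results describe are generated by the sampling procedures of Algorithms 1 and 2 in the previous section. The following Python sketch illustrates only the AE-style pooled sampling step (MUSAE would instead route each pair into the sub-corpus for its distance r); the adjacency-dictionary representation, the toy graph and the feature sets are assumptions for illustration, not the reference implementation.

```python
import random

def ae_corpus(adj, features, n_walks, walk_len, window):
    """Emit (node, feature) pairs from truncated first-order random walks.
    `adj` maps node -> non-empty list of neighbours, `features` maps node -> set
    of attributes."""
    nodes = list(adj)
    degrees = [len(adj[v]) for v in nodes]
    corpus = []
    for _ in range(n_walks):
        walk = [random.choices(nodes, weights=degrees, k=1)[0]]   # start node ~ degree
        while len(walk) < walk_len:
            walk.append(random.choice(adj[walk[-1]]))             # first-order walk
        for j in range(walk_len - window):                        # source nodes
            for r in range(1, window + 1):                        # targets in the window
                source, target = walk[j], walk[j + r]
                corpus += [(source, f) for f in features[target]]  # target's features
                corpus += [(target, f) for f in features[source]]  # source's features
    return corpus  # pairs fed to Skip-gram with negative sampling to learn G and H

# Tiny toy graph with node attributes.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = {0: {"a"}, 1: {"a", "b"}, 2: {"b"}, 3: {"c"}}
print(ae_corpus(adj, feats, n_walks=2, walk_len=5, window=2)[:6])
```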
If one does p truncated walks from each source node, the corpus generation complexity is O(p y l t x) and the model optimization runtime is O(b d p y l t x). Our later runtime experiments in Section 5 will underpin optimization runtime complexity discussed above. Corpus generation has a memory complexity of O(n l t x/y) while the same when generating p truncated walks per node has a memory complexity of O(p y l t x). Storing the parameters of an AE embedding has a memory complexity of O(y d) and MUSAE embeddings also use O(y d) memory. In order to evaluate the quality of created representations we test the embeddings on supervised downstream tasks such as node classification, transfer learning across networks, regression, and link prediction. Finally, we investigate how changes in the input size affect the runtime. For doing so we utilize social networks and web graphs that we collected from Facebook, Github, Twitch and Wikipedia. The data sources, collection procedures and the datasets themselves are described with great detail in Appendix B. In addition we tested our methods on citation networks widely used for model evaluation . Across all experiments we use the same hyperparameter settings of our own model, competing unsupervised methods and graph neural networks -these are respectively listed in Appendices C, E and F. We evaluate the node classification performance in two separate scenarios. In the first we do k-shot learning by using the attributed embedding vectors with logistic regression to predict labels on the Facebook, Github and Twitch Portugal graphs. In the second we test the predictive performance under a fixed size train-test split to compare against various embedding methods and competitive neural network architectures. In this experiment we take k randomly selected samples per class, and use the attributed node embeddings to train a logistic regression model with l 2 regularization and predict the labels on the remaining vertices. We repeated the above procedure with seeded splits 100 times to obtain robust comparable . From these we calculated the average of micro averaged F 1 scores to compare our own methods with other unsupervised node embedding procedures. We varied k in order to show the efficacy of the methods -what are the gains when the training set size is increased. These are plotted in Figure 2 for Facebook, Github and Twitch Portugal networks. Based on these plots it is evident that MUSAE and AE embeddings have little gains in terms of micro F 1 score when additional data points are added to the training set when k is larger than 12. This implies that our method is data efficient. Moreover, MUSAE-EGO and AE-EGO have a slight performance advantage, which means that including the nodes in the attributed random walks helps when a small amount of labeled data is available in the downstream task. Figure 2: Node classification k-shot learning performance as a function of training samples per class evaluated by average micro F 1 scores calculated from a 100 seeded train-test splits. In this series of experiments we created a 100 seeded train test splits of nodes (80% train -20% test) and calculated weighted, micro and macro averaged F 1 scores on the test set to compare our methods to various embedding and graph neural network methods. Across procedures the same random seeds were used to obtain the train-test split this way the performances are directly comparable. We attached these on the Facebook, Github and Twitch Portugal graphs as Table 6 of Appendix G. 
In each column red denotes the best performing unsupervised embedding model and blue corresponds to the strongest supervised neural model. We also attached additional supporting results using the same experimental setting with the unsupervised methods on the Cora, Citeseer, and Pubmed graphs as Table 5 of Appendix G. In terms of micro F1 score our strongest method outperforms the best competing unsupervised method on the Facebook and GitHub networks by 1.01% and 0.47% respectively. On the Twitch Portugal network the relative micro F1 advantage of ASNE over our best method is 1.02%. Supervised node embedding methods outperform our and other unsupervised methods on every dataset for most metrics. In terms of micro F1 this relative advantage over our best performing model variant is the largest with 4.67% on the Facebook network, and only 0.11% on Twitch Portugal. One can make four general observations based on our results: (i) multi-scale representations can help with the classification tasks compared to pooled ones; (ii) the addition of the nodes in the ego augmented models to the feature sets does not help the performance when a large amount of labeled training data is available; (iii) based on the standard errors, supervised neural models do not necessarily have a significant advantage over unsupervised methods (see the results on the GitHub and Twitch datasets); (iv) attributed node embedding methods that only consider first-order neighbourhoods have a poor performance. Neighbourhood based methods such as DeepWalk are transductive and the function used to create the embedding cannot map nodes that are not connected to the original graph to the latent space. However, vanilla MUSAE and AE are inductive and can easily map nodes to the embedding space if the attributes across the source and target graph are shared. This also means that supervised models trained on the embedding of a source graph are transferable. Importantly, attributed embedding methods such as AANE or ASNE that explicitly use the graph are unable to do this transfer. (Figure caption: The blue reference line denotes the test performance on the target dataset in a non transfer learning scenario (standard hyperparameter settings and split ratio); the red reference line denotes the performance of random guesses.) Using the disjoint Twitch country-level social networks (inter-country edges are not present) we did a transfer learning experiment. First, we learn an embedding function given the social network from a country with the standard parameter settings. Second, we train regularized logistic regression on the embedding to predict whether the Twitch user streams explicit content. Third, using the embedding function we map the target graph to the embedding space. Fourth, we use the logistic model to predict the node labels on the target graph. We evaluate the performance by the micro F1 score based on 10 experimental repetitions. These averages with standard error bars are plotted for the Twitch Germany, England and Spain datasets as target graphs on Figure 3. We added additional results with France, Portugal and Russia being the target country in Appendix H as Table 5. These results support that MUSAE and AE create features that are transferable across graphs that share vertex features. For example, based on a comparison to non transfer-learning we find that the transfer between the German and English user graphs is effective in terms of micro F1 score. Transfer from English users to German ones considerably improves performance, and the other way around there is little gain.
We also see that the upstream and downstream models that we trained on graphs with more vertices transfer well, while transfer to the smaller ones is generally poor - most of the time worse than random guessing. There is no clear evidence that either MUSAE or AE gives better results on this specific problem. We created embeddings of the Wikipedia webgraphs with all of our methods and the unsupervised baselines. Using an 80% train - 20% test split, we predict the log of average traffic for each page using an elastic net model. The hyperparameters of the downstream model are available in Appendix D. In Table 7 of Appendix I we report the average test R^2 and standard error of the predictive performance over 100 seeded train-test splits. Our key observations are: (i) MUSAE outperforms all benchmark neighbourhood preserving and attributed node embedding methods, with the strongest MUSAE variant outperforming the best baseline by between 2.05% and 10.03% (test R^2); (ii) MUSAE significantly outperforms AE by between 2.49% and 21.64% (test R^2); and (iii) using the vertices themselves as features (the ego augmented model) can improve the performance of embeddings, but this appears to be a dataset-specific phenomenon. The final series of experiments dedicated to the representation quality is about link prediction. We carried out an attenuated graph embedding trial to predict the removed edges from the graph. First, we randomly removed 50% of edges while the connectivity of the graph was not changed. Second, an embedding is created from the attenuated graph. Third, we calculate features for the removed edges and the same number of randomly selected pairs of nodes (negative candidates) with binary operators to create d-dimensional edge features. We use the binary operators applied in previous work. Specifically, we calculated the average, element-wise product, element-wise l1 norm and element-wise l2 norm of the two endpoint vectors. Finally, we created 100 seeded 80% train - 20% test splits and used logistic regression to predict whether an edge exists. We compared to attributed and neighbourhood based embedding methods and average AUC scores are presented in Tables 8 and 9 of Appendix J.
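As an illustration of the edge-feature construction just described, the sketch below applies the four binary operators to a pair of node embeddings. The element-wise form of the l2 operator (squared differences) is an assumption consistent with common usage, and the embedding values are made up.

```python
import numpy as np

def edge_features(emb_u, emb_v, operator):
    """Turn two node embeddings into one d-dimensional edge feature vector."""
    if operator == "average":
        return (emb_u + emb_v) / 2.0
    if operator == "hadamard":
        return emb_u * emb_v                  # element-wise product
    if operator == "l1":
        return np.abs(emb_u - emb_v)          # element-wise l1
    if operator == "l2":
        return (emb_u - emb_v) ** 2           # element-wise squared difference
    raise ValueError(f"unknown operator: {operator}")

# Hypothetical 4-dimensional embeddings of the two endpoints of a candidate edge.
u = np.array([0.1, -0.2, 0.4, 0.0])
v = np.array([0.3, 0.1, -0.1, 0.2])
for op in ("average", "hadamard", "l1", "l2"):
    print(op, edge_features(u, v, op))
```

The resulting vectors, computed for both held-out edges and negative candidates, are then fed to the logistic regression classifier.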
Moreover, the number of cores used during the optimization does not decrease the runtime when the number of unique features per vertex is large compared to the cardinality of the feature set. When we look at the change in the vertex set size we also see a linear behaviour. Doubling the input size simply results in a doubled optimization runtime. In addition, if one interpolates linearly from these results, it follows that a network with 1 million nodes, 8 edges per node, and 8 unique features per node can be embedded with MUSAE on commodity hardware in less than 5 hours. This interpolation assumes that the standard parameter settings proposed in Appendix C and 4 cores were used for optimization. We investigated attributed node embedding and proposed efficient pooled (AE) and multi-scale (MUSAE) attributed node embedding algorithms with linear runtime. We proved that these algorithms implicitly factorize probability matrices of features appearing in the neighbourhood of nodes. Two widely used neighbourhood preserving node embedding methods (Perozzi et al.) are in fact simplified cases of our models. On several datasets (Wikipedia, Facebook, GitHub, and citation networks) we found that representations learned by our methods, in particular MUSAE, outperform neighbourhood based node embedding methods. Our proposed embedding models are differentiated from other methods in that they encode feature information from higher order neighbourhoods. The most similar previous model, BANE, encodes node attributes from higher order neighbourhoods, but it has non-linear runtime complexity and the product of the adjacency matrix power and the feature matrix is decomposed explicitly. A PROOFS Lemma 1. The empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps (i) after; or (ii) before node v ∈ V. Proof. The proof is analogous to that given for Theorem 2.1 in prior work. We show that the computed statistics correspond to sequences of random variables with finite expectation, bounded variance, and covariances that tend to zero as the separation between variables within the sequence tends to infinity. The Weak Law of Large Numbers (S.N. Bernstein) then guarantees that the sample mean converges to the expectation of the random variable. We first consider the special case n = 1, i.e. we have a single sequence w 1, ..., w l generated by a random walk (see Algorithm 1). For a particular node-feature pair (w, f), we let Y i, i ∈ {1, ..., l − t}, be the indicator function for the event w i = w and f ∈ F i+r. The empirical statistic of interest is then the sample average of the Y i s. The covariance between Y i and Y j for j > i + r contains a difference term that tends to zero as j − i → ∞, since p(w j = w | w i+r) then tends to the stationary distribution p(w), regardless of w i+r. Thus, applying the Weak Law of Large Numbers, the sample average converges in probability to the expected value. A similar argument applies to case (ii). In both cases, the argument readily extends to the general setting where n > 1 with suitably defined indicator functions for each of the n random walks. Lemma 2. Empirical statistics of node-feature pairs obtained from random walks give unbiased estimates of the joint probabilities of observing feature f ∈ F r steps either side of node v ∈ V. The final step follows by symmetry of A, indicating how the Lemma can be extended to directed graphs.
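The empirical statistics used in Lemma 1 are simply relative frequencies of node-feature co-occurrences at offset r across sampled walks. The sketch below is a simplified illustration, not the paper's Algorithm 1: uniform first-order walks with uniformly chosen start nodes and a dictionary-based adjacency structure are assumptions made here for brevity.

from collections import Counter
import random

def sample_walk(adj, length, rng):
    # adj maps each node to a non-empty list of neighbours (assumed input format)
    v = rng.choice(list(adj))
    walk = [v]
    for _ in range(length - 1):
        v = rng.choice(adj[v])          # uniform first-order random walk step
        walk.append(v)
    return walk

def empirical_joint(adj, node_features, r, n_walks=100, length=80, seed=0):
    # Relative frequency with which feature f is observed r steps after node v.
    rng = random.Random(seed)
    counts, windows = Counter(), 0
    for _ in range(n_walks):
        walk = sample_walk(adj, length, rng)
        for i in range(len(walk) - r):
            windows += 1
            for f in node_features[walk[i + r]]:
                counts[(walk[i], f)] += 1   # node v paired with features seen r steps later
    return {pair: c / windows for pair, c in counts.items()}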
Our method was evaluated on a variety of social networks and web page-page graphs that we collected from openly available API services. In Table 1 we described the graphs with widely used statistics with respect to size, diameter, and level of clustering. We also included the average number of features per vertex and unique feature count in the last columns. These datasets are available with the source code of MUSAE and AE at https://github.com/iclr2020/MUSAE. This webgraph is a page-page graph of verified Facebook sites. Nodes represent official Facebook pages while the links are mutual likes between sites. Node features are extracted from the site descriptions that the page owners created to summarize the purpose of the site. This graph was collected through the Facebook Graph API in November 2017 and restricted to pages from 4 categories which are defined by Facebook. These categories are: politicians, governmental organizations, television shows and companies. As one can see in Table 1 it is a highly clustered graph with a large diameter. The task related to this dataset is multi-class node classification for the 4 site categories. The largest graph used for evaluation is a social network of GitHub developers which we collected from the public API in June 2019. Nodes are developers who have starred at least 10 repositories and edges are mutual follower relationships between them. The vertex features are extracted based on the location, repositories starred, employer and e-mail address. The task related to the graph is binary node classification -one has to predict whether the GitHub user is a web or a machine learning developer. This target feature was derived from the job title of each user. As the descriptive statistics show in Table 1 this is the largest graph that we use for evaluation with the highest sparsity. The datasets that we use to perform node level regression are Wikipedia page-page networks collected on three specific topics: chameleons, crocodiles and squirrels. In these networks nodes are articles from the English Wikipedia collected in December 2018, edges are mutual links that exist between pairs of sites. Node features describe the presence of nouns appearing in the articles. For each node we also have the average monthly traffic between October 2017 and November 2018. In the regression tasks used for embedding evaluation the logarithm of average traffic is the target variable. Table 1 shows that these networks are heterogeneous in terms of size, density, and clustering. B.4 TWITCH DATASETS These datasets used for node classification and transfer learning are Twitch user-user networks of gamers who stream in a certain language. Nodes are the users themselves and the links are mutual friendships between them. Vertex features are extracted based on the games played and liked, location and streaming habits. Datasets share the same set of node features, this makes transfer learning across networks possible. These social networks were collected in May 2018. The supervised task related to these networks is binary node classification -one has to predict whether a streamer uses explicit language. In MUSAE and AE models we have a set of parameters that we use for model evaluation. Our parameter settings listed in Table 2 are quite similar to the widely used general settings of random walk sampled implicit factorization machines (; ; ;). 
Each of our models is augmented with a Doc2Vec (a; b) embedding of node features -this is done such way that the overall dimension is still 128. The downstream tasks uses logistic and elastic net regression from Scikit-learn for node level classification, regression and link prediction. For the evaluation of every embedding model we use the standard settings of the library except for the regularization and norm mixing parameters. These are described in Table 3. Our purpose was a fair evaluation compared to other node embedding procedures. Because of this each we tried to use hyperparameter settings that give similar expressive power to the competing (; ;) and number of dimensions. • DeepWalk : We used the hyperparameter settings described in Table 2. While the original DeepWalk model uses hierarchical softmax to speed up calculations we used a negative sampling based implementation. This way DeepWalk can be seen as a special case of Node2Vec when the second-order random walks are equivalent to the firs-order walks. • LINE 2 : We created 64 dimensional embeddings based on first and second order proximity and concatenated these together for the downstream tasks. Other hyperparameters are taken from the original work. • Node2Vec : Except for the in-out and return parameters that control the second-order random walk behavior we used the hyperparameter settings described in Table 2. These behavior control parameters were tuned with grid search from the {4, 2, 1, 0.5, 0.25} set using a train-validation split of 80% − 20% within the training set itself. • Walklets : We used the hyperparameters described in Table 2 except for window size. We set a window size of 4 with individual embedding sizes of 32. This way the overall number of dimensions of the representation remained the same. • The attributed node embedding methods AANE, ASNE, BANE, TADW, TENE all use the hyperparameters described in the respective papers except for the dimension. We parametrized these methods such way that each of the final embeddings used in the downstream tasks is 128 dimensional. Each model was optimized with the Adam optimizer with the standard moving average parameters and the model implementations are sparsity aware modifications based on PyTorch Geometric . We needed these modifications in order to accommodate the large number of vertex features -see the last column in Table 1. Except for the GAT model (Veličković et al., 2018) we used ReLU intermediate activation functions with a softmax unit in the final layer for classification. The hyperparameters used for the training and regularization of the neural models are listed in Table 4. Except for the APPNP model each baseline uses information up to 2-hop neighbourhoods. The model specific settings when we needed to deviate from the basic settings which are listed in Table 4 were as follows: • Classical GCN : We used the standard parameter settings described in this section. • GraphSAGE : We utilized a graph convolutional aggregator on the sampled neighbourhoods, samples of 40 nodes per source, and standard settings. • GAT (Veličković et al., 2018): The negative slope parameter of the leaky ReLU function was 0.2, we applied a single attention head, and used the standard hyperparameter settings. • MixHop : We took advantage of the 0 th, 1 st and 2 nd powers of the normalized adjacency matrix with 32 dimensional convolutional filters for creating the first hidden representations. This was fed to a feed-forward layer to classify the nodes. 
• ClusterGCN : Just as did, we used the METIS procedure . We clustered the graphs into disjoint clusters, and the number of clusters was the same as the number of node classes (e.g. in case of the Facebook page-page network we created 4 clusters). For training we used the earlier described setup. • APPNP : The top level feed-forward layer had 32 hidden neurons, the teleport probability was set as 0.2 and we used 20 steps for approximate personalized pagerank calculation. • SGCONV : We used the 2 nd power of the normalized adjacency matrix for training the classifier. I REGRESSION ON WIKIPEDIA PAGE-PAGE GRAPHS | We develop efficient multi-scale approximate attributed network embedding procedures with provable properties. | 649 | scitldr |
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms. Deep learning models have achieved state-of-the-art performance on visual recognition tasks such as image classification. The strong performance, however, heavily relies on training a network with abundant labeled instances with diverse visual variations (e.g., thousands of examples for each new class even with pre-training on large-scale dataset with base classes). The human annotation cost as well as the scarcity of data in some classes (e.g., rare species) significantly limit the applicability of current vision systems to learn new visual concepts efficiently. In contrast, the human visual systems can recognize new classes with extremely few labeled examples. It is thus of great interest to learn to generalize to new classes with a limited amount of labeled examples for each novel class. The problem of learning to generalize to unseen classes during training, known as few-shot classification, has attracted considerable attention BID29; BID27; BID6; BID25; BID28; BID9; BID24. One promising direction to few-shot classification is the meta-learning paradigm where transferable knowledge is extracted and propagated from a collection of tasks to prevent overfitting and improve generalization. Examples include model initialization based methods BID25; BID6, metric learning methods BID29; BID27; BID28, and hallucination based methods BID0; BID11; BID31. Another line of work BID10; BID24 also demonstrates promising by directly predicting the weights of the classifiers for novel classes. Limitations. While many few-shot classification algorithms have reported improved performance over the state-of-the-art, there are two main challenges that prevent us from making a fair comparison and measuring the actual progress. First, the discrepancy of the implementation details among multiple few-shot learning algorithms obscures the relative performance gain. The performance of baseline approaches can also be significantly under-estimated (e.g., training without data augmentation). Second, while the current evaluation focuses on recognizing novel class with limited training examples, these novel classes are sampled from the same dataset. The lack of domain shift between the base and novel classes makes the evaluation scenarios unrealistic. Our work. 
In this paper, we present a detailed empirical study to shed new light on the few-shot classification problem. First, we conduct consistent comparative experiments to compare several representative few-shot classification methods on common ground. Our results show that using a deep backbone shrinks the performance gap between different methods in the setting of limited domain differences between base and novel classes. Second, by replacing the linear classifier with a distance-based classifier as used in BID10; BID24, the baseline method is surprisingly competitive with current state-of-the-art meta-learning algorithms. Third, we introduce a practical evaluation setting where there exists domain shift between base and novel classes (e.g., sampling base classes from generic object categories and novel classes from fine-grained categories). Our results show that sophisticated few-shot learning algorithms do not provide performance improvement over the baseline under this setting. Through making the source code and model implementations with a consistent evaluation setting publicly available, we hope to foster future progress in the field. Our contributions. 1. We provide a unified testbed for several different few-shot classification algorithms for a fair comparison. Our empirical evaluation reveals that the use of a shallow backbone commonly used in existing work leads to favorable results for methods that explicitly reduce intra-class variation. Increasing the model capacity of the feature backbone reduces the performance gap between different methods when domain differences are limited. 2. We show that a baseline method with a distance-based classifier surprisingly achieves competitive performance with the state-of-the-art meta-learning methods on both mini-ImageNet and CUB datasets. 3. We investigate a practical evaluation setting where base and novel classes are sampled from different domains. We show that current few-shot classification algorithms fail to address such domain shifts and are inferior even to the baseline method, highlighting the importance of learning to adapt to domain differences in few-shot learning. Given abundant training examples for the base classes, few-shot learning algorithms aim to learn to recognize novel classes with a limited amount of labeled examples. Much effort has been devoted to overcoming the data efficiency issue. In the following, we discuss representative few-shot learning algorithms organized into three main categories: initialization based, metric learning based, and hallucination based methods. Initialization based methods tackle the few-shot learning problem by "learning to fine-tune". One approach aims to learn good model initialization (i.e., the parameters of a network) so that the classifiers for novel classes can be learned with a limited number of labeled examples and a small number of gradient update steps BID6 BID22 BID26. Another line of work focuses on learning an optimizer. Examples include the LSTM-based meta-learner for replacing the stochastic gradient descent optimizer BID25 and the weight-update mechanism with an external memory BID21. While these initialization based methods are capable of achieving rapid adaptation with a limited number of training examples for novel classes, our experiments show that these methods have difficulty in handling domain shifts between base and novel classes. Distance metric learning based methods address the few-shot classification problem by "learning to compare".
The intuition is that if a model can determine the similarity of two images, it can classify an unseen input image with the labeled instances BID16. To learn a sophisticated comparison models, meta-learning based methods make their prediction conditioned on distance or metric to few labeled instances during the training process. Examples of distance metrics include cosine similarity BID29, Euclidean distance to class-mean representation BID27, CNN-based relation module BID28, ridge regression BID1, and graph neural network BID9. In this paper, we compare the performance of three distance metric learning methods. Our show that a simple baseline method with a distancebased classifier (without training over a collection of tasks/episodes as in meta-learning) achieves competitive performance with respect to other sophisticated algorithms. Besides meta-learning methods, both BID10 and BID24 develop a similar method to our Baseline++ (described later in Section 3.2). The method in BID10 learns a weight generator to predict the novel class classifier using an attentionbased mechanism (cosine similarity), and the BID24 directly use novel class features as their weights. Our Baseline++ can be viewed as a simplified architecture of these methods. Our focus, however, is to show that simply reducing intra-class variation in a baseline method using the base class data leads to competitive performance. Hallucination based methods directly deal with data deficiency by "learning to augment". This class of methods learns a generator from data in the base classes and use the learned generator to hallucinate new novel class data for data augmentation. One type of generator aims at transferring appearance variations exhibited in the base classes. These generators either transfer variance in base class data to novel classes BID11, or use GAN models BID0 to transfer the style. Another type of generators does not explicitly specify what to transfer, but directly integrate the generator into a meta-learning algorithm for improving the classification accuracy BID31. Since hallucination based methods often work with other few-shot methods together (e.g. use hallucination based and metric learning based methods together) and lead to complicated comparison, we do not include these methods in our comparative study and leave it for future work. Domain adaptation techniques aim to reduce the domain shifts between source and target domain BID23; BID8, as well as novel tasks in a different domain BID14. Similar to domain adaptation, we also investigate the impact of domain difference on fewshot classification algorithms in Section 4.5. In contrast to most domain adaptation problems where a large amount of data is available in the target domain (either labeled or unlabeled), our problem setting differs because we only have very few examples in the new domain. Very recently, the method in BID5 addresses the one-shot novel category domain adaptation problem, where in the testing stage both the domain and the category to classify are changed. Similarly, our work highlights the limitations of existing few-shot classification algorithms problem in handling domain shift. To put these problem settings in context, we provided a detailed comparison of setting difference in the appendix A1. In this section, we first outline the details of the baseline model (Section 3.1) and its variant (Section 3.2), followed by describing representative meta-learning algorithms (Section 3.3) studied in our experiments. 
Given abundant base class labeled data X b and a small amount of novel class labeled data X n, the goal of few-shot classification algorithms is to train classifiers for novel classes (unseen during training) with few labeled examples. Our baseline model follows the standard transfer learning procedure of network pre-training and fine-tuning. FIG0 illustrates the overall procedure. Training stage. We train a feature extractor f θ (parametrized by the network parameters θ) and the classifier C(·|W b) (parametrized by the weight matrix W b ∈ R d×c) from scratch by minimizing a standard cross-entropy classification loss L pred using the training examples in the base classes X b. Fine-tuning stage. To adapt the model to recognize novel classes in the fine-tuning stage, we fix the pre-trained network parameter θ in our feature extractor f θ and train a new classifier C(.|W n) (parametrized by the weight matrix W n) by minimizing L pred using the few labeled examples (i.e., the support set) in the novel classes X n. In addition to the baseline model, we also implement a variant of the baseline model, denoted as Baseline++, which explicitly reduces intra-class variation among features during training. The importance of reducing intra-class variations of features has been highlighted in deep metric learning BID15 and few-shot classification methods BID10. The training procedure of Baseline++ is the same as the original Baseline model except for the classifier design. As shown in FIG0, we still have a weight matrix W b ∈ R d×c of the classifier in the training stage and a W n in the fine-tuning stage in Baseline++. The classifier design, however, is different from the linear classifier used in the Baseline. Take the weight matrix W b as an example. We can write the weight matrix W b as [w 1, w 2, ...w c], where each class has a d-dimensional weight vector. In the training stage, for an input feature f θ (x i) where x i ∈ X b, we compute its cosine similarity to each weight vector [w 1, · · ·, w c] and obtain the similarity scores s i,j = f θ (x i) · w j / (‖f θ (x i)‖ ‖w j‖) for each class j = 1, ..., c. We can then obtain the prediction probability for each class by normalizing these similarity scores with a softmax function. Here, the classifier makes a prediction based on the cosine distance between the input feature and the learned weight vectors representing each class. Consequently, training the model with this distance-based classifier explicitly reduces intra-class variations. Intuitively, the learned weight vectors [w 1, · · ·, w c] can be interpreted as prototypes (similar to BID27; BID29) for each class and the classification is based on the distance of the input feature to these learned prototypes. The softmax function prevents the learned weight vectors collapsing to zeros. We clarify that the network design in Baseline++ is not our contribution. The concept of distance-based classification has been extensively studied in BID18 and recently has been revisited in the few-shot classification setting BID10; BID24. Here we describe the formulations of meta-learning methods used in our study. We consider three distance metric learning based methods (MatchingNet, ProtoNet, and RelationNet). While meta-learning is not clearly defined, BID29 considers a few-shot classification method as meta-learning if the prediction is conditioned on a small support set S, because it makes the training procedure explicitly learn to learn from a given small support set. As shown in FIG1, meta-learning algorithms consist of a meta-training and a meta-testing stage.
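The Baseline++ classifier described above amounts to a few lines of code. The following is a minimal PyTorch sketch, not the authors' released implementation; the weight initialization and the exact value of the scale constant applied to the cosine scores are assumptions here (a constant scalar is discussed in Section 4.1).

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, n_classes, scale=2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))  # one prototype-like vector per class
        self.scale = scale

    def forward(self, features):
        # Normalize features and class weights so their dot product equals cosine
        # similarity; the scaled scores are then fed to a softmax/cross-entropy loss.
        f = F.normalize(features, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * f @ w.t()

# Training uses a standard cross-entropy loss on these scores, e.g.:
# logits = CosineClassifier(d, c)(feature_extractor(x)); loss = F.cross_entropy(logits, y)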
In the meta-training stage, the algorithm first randomly select N classes, and sample small base support set S b and a base query set Q b from data samples within these classes. The objective is to train a classification model M that minimizes N-way prediction loss L N−way of the samples in the query set Q b. Here, the classifier M is conditioned on provided support set S b. By making prediction conditioned on the given support set, a meta-learning method can learn how to learn from limited labeled data through training from a collection of tasks (episodes). In the meta-testing stage, all novel class data X n are considered as the support set for novel classes S n, and the classification model M can be adapted to predict novel classes with the new support set S n.Different meta-learning methods differ in their strategies to make prediction conditioned on support set (see FIG1). For both and , the prediction of the examples in a query set Q is based on comparing the distance between the query feature and the support feature from each class. MatchingNet compares cosine distance between the query feature and each support feature, and computes average cosine distance for each class, while ProtoNet compares the Euclidean distance between query features and the class mean of support features. RelationNet BID28 shares a similar idea, but it replaces distance with a learnable relation module. The MAML method BID6 is an initialization based meta-learning algorithm, where each support set is used to adapt the initial model parameters using few gradient updates. As different support sets have different gradient updates, the adapted model is conditioned on the support set. Note that when the query set instances are predicted by the adapted model in the meta-training stage, the loss of the query set is used to update the initial model, not the adapted model. Datasets and scenarios. We address the few-shot classification problem under three scenarios: 1) generic object recognition, 2) fine-grained image classification, and 3) cross-domain adaptation. For object recognition, we use the mini-ImageNet dataset commonly used in evaluating few-shot classification algorithms. The mini-ImageNet dataset consists of a subset of 100 classes from the ImageNet dataset BID4 and contains 600 images for each class. The dataset was first proposed by BID29, but recent works use the follow-up setting provided by BID25, which is composed of randomly selected 64 base, 16 validation, and 20 novel classes. For fine-grained classification, we use CUB-200-2011 dataset BID30 (referred to as the CUB hereafter). The CUB dataset contains 200 classes and 11,788 images in total. Following the evaluation protocol of BID13, we randomly split the dataset into 100 base, 50 validation, and 50 novel classes. For the cross-domain scenario (mini-ImageNet →CUB), we use mini-ImageNet as our base class and the 50 validation and 50 novel class from CUB. Evaluating the cross-domain scenario allows us to understand the effects of domain shifts to existing few-shot classification approaches. Implementation details. In the training stage for the Baseline and the Baseline++ methods, we train 400 epochs with a batch size of 16. In the meta-training stage for meta-learning methods, we train 60,000 episodes for 1-shot and 40,000 episodes for 5-shot tasks. We use the validation set to select the training episodes with the best accuracy. 
2 In each episode, we sample N classes to form N-way classification (N is 5 in both meta-training and meta-testing stages unless otherwise mentioned). For each class, we pick k labeled instances as our support set and 16 instances for the query set for a k-shot task. In the fine-tuning or meta-testing stage for all methods, we average the over 600 experiments. In each experiment, we randomly sample 5 classes from novel classes, and in each class, we also pick k instances for the support set and 16 for the query set. For Baseline and Baseline++, we use the entire support set to train a new classifier for 100 iterations with a batch size of 4. For meta-learning methods, we obtain the classification model conditioned on the support set as in Section 3.3.All methods are trained from scratch and use the Adam optimizer with initial learning rate 10 −3. We apply standard data augmentation including random crop, left-right flip, and color jitter in both the training or meta-training stage. Some implementation details have been adjusted individually for each method. For Baseline++, we multiply the cosine similarity by a constant scalar 2 to adjust original value range [-1,1] to be more appropriate for subsequent softmax layer. For MatchingNet, we use an FCE classification layer without fine-tuning in all experiments and also multiply cosine similarity by a constant scalar. For RelationNet, we replace the L2 norm with a softmax layer to expedite training. For MAML, we use a first-order approximation in the gradient for memory efficiency. The approximation has been shown in the original paper and in our appendix to have nearly identical performance as the full version. We choose the first-order approximation for its efficiency. We now conduct experiments on the most common setting in few-shot classification, 1-shot and 5-shot classification, i.e., 1 or 5 labeled instances are available from each novel class. We use a four-layer convolution backbone (Conv-4) with an input size of 84x84 as in BID27 and perform 5-way classification for only novel classes during the fine-tuning or meta-testing stage. To validate the correctness of our implementation, we first compare our to the reported numbers for the mini-ImageNet dataset in Table 1. Note that we have a ProtoNet #, as we use 5-way classification in the meta-training and meta-testing stages for all meta-learning methods as mentioned in Section 4.1; however, the official reported from ProtoNet uses 30-way for one shot and 20-way for five shot in the meta-training stage in spite of using 5-way in the meta-testing stage. We report this for completeness. From Table 1, we can observe that all of our re-implementation for meta-learning methods do not fall more than 2% behind reported performance. These minor differences can be attributed to our Table 1: Validating our re-implementation. We validate our few-shot classification implementation on the mini-ImageNet dataset using a Conv-4 backbone. We report the mean of 600 randomly generated test episodes as well as the 95% confidence intervals. Our reproduced to all few-shot methods do not fall behind by more than 2% to the reported in the literature. We attribute the slight discrepancy to different random seeds and minor implementation differences in each method. "Baseline * " denotes the without applying data augmentation during training. ProtoNet # indicates performing 30-way classification in 1-shot and 20-way in 5-shot during the meta-training stage. 
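As an illustration of the episodic procedure described in Sections 3.3 and 4.1, the following is a minimal sketch of one ProtoNet-style episode, in which class prototypes are the means of support features and query examples are scored by negative squared Euclidean distance. The input grouping convention (support and query tensors ordered by class) is an assumption made for illustration.

import torch
import torch.nn.functional as F

def protonet_episode_loss(encoder, support_x, query_x, n_way, k_shot, n_query):
    # support_x is assumed to contain n_way * k_shot examples grouped by class;
    # query_x contains n_way * n_query examples grouped in the same class order.
    z_support = encoder(support_x).view(n_way, k_shot, -1)
    prototypes = z_support.mean(dim=1)                  # one prototype per class
    z_query = encoder(query_x)
    dists = torch.cdist(z_query, prototypes) ** 2       # squared Euclidean distances
    labels = torch.arange(n_way).repeat_interleave(n_query)
    return F.cross_entropy(-dists, labels)              # negative distance acts as the logit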
modifications of some implementation details to ensure a fair comparison among all methods, such as using the same optimizer for all methods. Moreover, our implementation of existing work also improves the performance of some of the methods. For example, our show that the Baseline approach under 5-shot setting can be improved by a large margin since previous implementations of the Baseline do not include data augmentation in their training stage, thereby leads to over-fitting. While our Baseline * is not as good as reported in 1-shot, our Baseline with augmentation still improves on it, and could be even higher if our reproduced Baseline * matches the reported statistics. In either case, the performance of the Baseline method is severely underestimated. We also improve the of MatchingNet by adjusting the input score to the softmax layer to a more appropriate range as stated in Section 4.1. On the other hand, while ProtoNet # is not as good as ProtoNet, as mentioned in the original paper a more challenging setting in the meta-training stage leads to better accuracy. We choose to use a consistent 5-way classification setting in subsequent experiments to have a fair comparison to other methods. This issue can be resolved by using a deeper backbone as shown in Section 4.3.After validating our re-implementation, we now report the accuracy in TAB1. Besides additionally reporting on the CUB dataset, we also compare Baseline++ to other methods. Here, we find that Baseline++ improves the Baseline by a large margin and becomes competitive even when compared with other meta-learning methods. The demonstrate that reducing intra-class variation is an important factor in the current few-shot classification problem setting. FIG2 and TAB9 for larger figure and detailed statistics.)However, note that our current setting only uses a 4-layer backbone, while a deeper backbone can inherently reduce intra-class variation. Thus, we conduct experiments to investigate the effects of backbone depth in the next section. In this section, we change the depth of the feature backbone to reduce intra-class variation for all methods. See appendix for statistics on how network depth correlates with intra-class variation. Starting from Conv-4, we gradually increase the feature backbone to Conv-6, ResNet-10, 18 and 34, where Conv-6 have two additional convolution blocks without pooling after Conv-4. ResNet-18 and 34 are the same as described in BID12 with an input size of 224×224, while ResNet-10 is a simplified version of ResNet-18 where only one residual building block is used in each layer. The statistics of this experiment would also be helpful to other works to make a fair comparison under different feature backbones. Results of the CUB dataset shows a clearer tendency in FIG2. As the backbone gets deeper, the gap among different methods drastically reduces. Another observation is how ProtoNet improves rapidly as the backbone gets deeper. While using a consistent 5-way classification as discussed in Section 4.2 degrades the accuracy of ProtoNet with Conv-4, it works well with a deeper backbone. Thus, the two observations above demonstrate that in the CUB dataset, the gap among existing methods would be reduced if their intra-class variation are all reduced by a deeper backbone. However, the of mini-ImageNet in FIG2 is much more complicated. In the 5-shot setting, both Baseline and Baseline++ achieve good performance with a deeper backbone, but some metalearning methods become worse relative to them. 
Thus, other than intra-class variation, we can assume that the dataset is also important in few-shot classification. One difference between CUB and mini-ImageNet is their domain difference in base and novel classes since classes in mini-ImageNet have a larger divergence than CUB in a word-net hierarchy BID19. To better understand the effect, below we discuss how domain differences between base and novel classes impact few-shot classification . To further dig into the issue of domain difference, we design scenarios that provide such domain shifts. Besides the fine-grained classification and object recognition scenarios, we propose a new cross-domain scenario: mini-ImageNet →CUB as mentioned in Section 4.1. We believe that this is practical scenario since collecting images from a general class may be relatively easy (e.g. due to increased availability) but collecting images from fine-grained classes might be more difficult. We conduct the experiments with a ResNet-18 feature backbone. As shown in TAB4, the Baseline outperforms all meta-learning methods under this scenario. While meta-learning methods learn to learn from the support set during the meta-training stage, they are not able to adapt to novel classes that are too different since all of the base support sets are within the same dataset. A similar concept is also mentioned in BID29. In contrast, the Baseline simply replaces and trains a new classifier based on the few given novel class data, which allows it to quickly adapt to a novel class and is less affected by domain shift between the source and target domains. The Baseline also performs better than the Baseline++ method, possibly because additionally reducing intra-class variation compromises adaptability. In Figure 4, we can further observe how Baseline accuracy becomes relatively higher as the domain difference gets larger. That is, as the domain difference grows larger, the adaptation based on a few novel class instances becomes more important. To further adapt meta-learning methods as in the Baseline method, an intuitive way is to fix the features and train a new softmax classifier. We apply this simple adaptation scheme to MatchingNet and ProtoNet. For MAML, it is not feasible to fix the feature as it is an initialization method. In contrast, since it updates the model with the support set for only a few iterations, we can adapt further by updating for as many iterations as is required to train a new classification layer, which is 100 updates as mentioned in Section 4.1. For RelationNet, the features are convolution maps rather than the feature vectors, so we are not able to replace it with a softmax. As an alternative, we randomly split the few training data in novel class into 3 support and 2 query data to finetune the relation module for 100 epochs. The of further adaptation are shown in FIG3; we can observe that the performance of MatchingNet and MAML improves significantly after further adaptation, particularly in the miniImageNet →CUB scenario. The demonstrate that lack of adaptation is the reason they fall behind the Baseline. However, changing the setting in the meta-testing stage can lead to inconsistency with the meta-training stage. The ProtoNet shows that performance can degrade in sce-narios with less domain difference. Thus, we believe that learning how to adapt in the meta-training stage is important future direction. 
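The further-adaptation step described above (freezing the learned features and training a new softmax classifier on the novel support set, here for the 100 updates mentioned in Section 4.1) can be sketched as follows; the batch sampling scheme and optimizer settings are assumptions for illustration rather than the exact experimental configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

def adapt_new_classifier(frozen_encoder, support_x, support_y, n_way,
                         iters=100, batch_size=4, lr=1e-3):
    with torch.no_grad():
        feats = frozen_encoder(support_x)               # features stay fixed during adaptation
    classifier = nn.Linear(feats.size(1), n_way)
    opt = torch.optim.Adam(classifier.parameters(), lr=lr)
    for _ in range(iters):
        idx = torch.randperm(feats.size(0))[:batch_size]   # small batch from the support set
        loss = F.cross_entropy(classifier(feats[idx]), support_y[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier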
In summary, as domain differences are likely to exist in many real-world applications, we consider that learning to learn adaptation in the meta-training stage would be an important direction for future meta-learning research in few-shot classification. In this paper, we have investigated the limits of the standard evaluation setting for few-shot classification. Through comparing methods on a common ground, our show that the Baseline++ model is competitive to state of art under standard conditions, and the Baseline model achieves competitive performance with recent state-of-the-art meta-learning algorithms on both CUB and mini-ImageNet benchmark datasets when using a deeper feature backbone. Surprisingly, the Baseline compares favorably against all the evaluated meta-learning algorithms under a realistic scenario where there exists domain shift between the base and novel classes. By making our source code publicly available, we believe that community can benefit from the consistent comparative experiments and move forward to tackle the challenge of potential domain shifts in the context of few-shot learning. As mentioned in Section 2, here we discuss the relationship between domain adaptation and fewshot classification to clarify different experimental settings. As shown in Table A1, in general, domain adaptation aims at adapting source dataset knowledge to the same class in target dataset. On the other hand, the goal of few-shot classification is to learn from base classes to classify novel classes in the same dataset. Several recent work tackle the problem at the intersection of the two fields of study. For example, cross-task domain adaptation BID14 also discuss novel classes in the target dataset. In contrast, while BID20 has "few-shot" in the title, their evaluation setting focuses on classifying the same class in the target dataset. If base and novel classes are both drawn from the same dataset, minor domain shift exists between the base and novel classes, as we demonstrated in Section 4.4. To highlight the impact of domain shift, we further propose the mini-ImageNet →CUB setting. The domain shift in few-shot classification is also discussed in BID5. Different meta-learning works use different terminology in their works. We highlight their differences in appendix TAB1 to clarify the inconsistency. For character recognition, we use the Omniglot dataset BID17 commonly used in evaluating few-shot classification algorithms. Omniglot contains 1,623 characters from 50 languages, and we follow the evaluation protocol of BID29 to first augment the classes by rotations in 90, 180, 270 degrees, ing in 6492 classes. We then follow BID27 to split these classes into 4112 base, 688 validation, and 1692 novel classes. Unlike BID27, our validation classes are only used to monitor the performance during meta-training. For cross-domain character recognition (Omniglot→EMNIST), we follow the setting of BID5 to use Omniglot without Latin characters and without rotation augmentation as base classes, so there are 1597 base classes. On the other hand, EMNIST dataset BID2 contains 10-digits and upper and lower case alphabets in English, so there are 62 classes in total. We split these classes into 31 validation and 31 novel classes, and invert the white-on-black characters to black-on-white as in Omniglot. We use a Conv-4 backbone with input size 28x28 for both settings. As Omniglot characters are black-and-white, center-aligned and rotation sensitive, we do not use data augmentation in this experiment. 
To reduce the risk of over-fitting, we use the validation set to select the epoch or episode with the best accuracy for all methods, including baseline and baseline++. 4 As shown in TAB4, in both Omniglot and Omniglot→EMNIST settings, meta-learning methods outperform baseline and baseline++ in 1-shot. However, all methods reach comparable performance in the 5-shot classification setting. We attribute this to the lack of data augmentation for the baseline and baseline++ methods as they tend to over-fit base classes. When sufficient examples in novel classes are available, the negative impact of over-fitting is reduced. BID29 ) apply a Baseline with 1-NN classifier in the test stage. We include our as in TAB8. The shows that using 1-NN classifier has better performance than that of using the softmax classifier in 1-shot setting, but softmax classifier performs better in 5-shot setting. We note that the number here are not directly comparable to in BID29 because we use a different mini-ImageNet as in BID25. FIG0. We observe that while the full version MAML converge faster, both versions reach similar accuracy in the end. This phenomena is consistent with the difference of first-order (e.g. gradient descent) and secondorder methods (e.g. Newton) in convex optimization problems. Second-order methods converge faster at the cost of memory, but they both converge to similar objective value. As mentioned in Section 4.3, here we demonstrate decreased intra-class variation as the network depth gets deeper as in FIG1. We use the Davies-Bouldin index BID3 to measure intra-class variation. The Davies-Bouldin index is a metric to evaluate the tightness in a cluster (or class, in our case). Our show that both intra-class variation in the base and novel class feature decrease using deeper backbones. Here we use Davies-Bouldin index to represent intra-class variation, which is a metric to evaluate the tightness in a cluster (or class, in our case). The statistics are Davies-Bouldin index for all base and novel class feature (extracted by feature extractor learned after training or meta-training stage) for CUB dataset under different backbone. Here we show a high-resolution version of FIG2 in FIG2 and show detailed statistics in TAB9 for easier comparison. We experiment with a practical setting that handles different testing scenarios. Specifically, we conduct the experiments of 5-way meta-training and N-way meta-testing (where N = 5, 10, 20) to examine the effect of testing scenarios that are different from training. As in Table A6, we compare the methods Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet. Note that we are unable to apply the MAML method as MAML learns the initialization for the classifier and can thus only be updated to classify the same number of classes. Our show that for classification with a larger N-way in the meta-testing stage, the proposed Baseline++ compares favorably against other methods in both shallow or deeper backbone settings. We attribute the to two reasons. First, to perform well in a larger N-way classification setting, one needs to further reduce the intra-class variation to avoid misclassification. Thus, Baseline++ has better performance than Baseline in both backbone settings. 
Second, as meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, the performance of these algorithms may drop significantly when increasing the N-way in the meta-testing stage because the tasks of 10-way or 20-way classification are harder than that of 5-way one. One may address this issue by performing a larger N-way classification in the meta-training stage (as suggested in BID27). However, it may encounter the issue of memory constraint. For example, to perform a 20-way classification with 5 support images and 15 query images in each class, we need to fit a batch size of 400 (20 x (5 + 15)) that must fit into the GPUs. Without special hardware parallelization, the large batch size may prevent us from training models with deeper backbones such as ResNet. Table A6: 5-way meta-training and N-way meta-testing experiment. The experimental are on mini-ImageNet with 5-shot. We could see Baseline++ compares favorably against other methods in both shallow or deeper backbone settings. | A detailed empirical study in few-shot classification that revealing challenges in standard evaluation setting and showing a new direction. | 650 | scitldr |
Temporal logics are useful for describing dynamic system behavior, and have been successfully used as a language for goal definitions during task planning. Prior works on inferring temporal logic specifications have focused on "summarizing" the input dataset -- i.e., finding specifications that are satisfied by all plan traces belonging to the given set. In this paper, we examine the problem of inferring specifications that describe temporal differences between two sets of plan traces. We formalize the concept of providing such contrastive explanations, then present a Bayesian probabilistic model for inferring contrastive explanations as linear temporal logic specifications. We demonstrate the efficacy, scalability, and robustness of our model for inferring correct specifications across various benchmark planning domains and for a simulated air combat mission. In a meeting where multiple plan options are under deliberation by a team, it would be helpful for that team's resolution process if someone could intuitively explain how the plans under consideration differ from one another. Also, given a need to identify differences in execution behavior between distinct groups of users (e.g., a group of users who successfully completed a task using a particular system versus those who did not), explanations that identify distinguishing patterns between group behaviors can yield valuable analytics and insights toward iterative system refinement. In this paper, we seek to generate explanations for how two sets of divergent plans differ. We focus on generating such contrastive explanations by discovering specifications satisfied by one set of plans, but not the other. Prior works on plan explanations include those related to plan recognition for inferring latent goals through observations BID25 BID35, works on system diagnosis and excuse generation in order to explain plan failures BID29 BID10, and those focused on synthesizing "explicable" plans -i.e., plans that are self-explanatory with respect to a human's mental model BID16. The aforementioned works, however, only involve the explanation or generation of a single plan; we instead focus on explaining differences between multiple plans, which can be helpful in various applications, such as the analysis of competing systems and compliance models, and detecting anomalous behaviour of users. A specification language should be used in order to achieve clear and effective plan explanations. Prior works have considered surface-level metrics such as plan cost and action (or causal link) similarity measures to describe plan differences BID23 BID3. In this work, we leverage linear temporal logic (LTL) BID24 which is an expressive language for capturing temporal relations of state variables. We use a plan's individual satisfaction (or dissatisfaction) of LTL specifications to describe their differences. LTL specifications have been widely used in both industrial systems and planning algorithms to compactly describe temporal properties BID32. They are human interpretable when expressed as compositions of predefined templates; inversely, they can be constructed from natural language descriptions BID7 ) and serve as natural patterns when encoding high-level human strategies for planning constraints BID14.Although a suite of LTL miners have been developed for software engineering and verification purposes BID32 BID17 BID28, they primarily focus on mining properties that summarize the overall behavior on a single set of plan traces. 
Recently, BID22 presented SAT-based algorithms to construct a LTL specification that asserts contrast between two sets of traces. The algorithms, however, are designed to output only a single explanation, and are susceptible to failure when the input contains imperfect traces. Similar to Neider and Gavran, our problem focuses on mining contrastive explanations between two sets of traces, but we adopt a probabilistic approach -we present a Bayesian inference model that can generate multiple explanations while demonstrating robustness to noisy input. The model also permits scalability when searching in large hypothesis spaces and allows for flexibility in incorporating various forms of prior knowledge and system designer preferences. We demonstrate the efficacy of our model for extracting correct explanations on plan traces across various benchmark planning domains and for a simulated air combat mission. Plan explanations are becoming increasingly important as automated planners and humans collaborate. This first involves humans making sense of the planner's output (e.g., PDDL plans), where prior work has focused on developing user-friendly interfaces that provide graphical visualizations to describe the causal links and temporal relations of plan steps BID1 BID26 BID21. The outputs of these systems, however, require an expert for interpretation and do not provide a direct explanation as to why the planner made certain decisions to realize the outputted plan. Automatic generation of explanations has been studied in goal recognition settings, where the objective is to infer the latent goal state that best explains the incomplete sequence of observations BID25 BID30. Works on explicable planning emphasize the generation of plans that are deemed selfexplanatory, defined in terms of optimizing plan costs for a human's mental model of the world BID16. Mixed-initiative planners iteratively revise their plan generation based on user input (e.g. action modifications), indirectly promoting an understanding of differences across newly generated plans through continual user interaction BID27 BID3. All aforementioned works deal with explainability with respect to a single planning problem specification, whereas our model deals with explaining differences in specifications governing two distinct sets of plans given as input. Works on model reconciliation focus on producing explanations for planning models (i.e. predicates, preconditions and effects), instead of the realized plans. Explanations are specified in the form of model updates, iteratively bringing an incomplete model to a more complete world model. The term, "contrastive explanation," is used in these works to identify the relevant differences between the input pair of models. Our work is similar in spirit but focuses on producing a specification of differences in the constraints satisfied among realized plans. Our approach takes sets of observed plans as input rather than planning models. While model updates are an important modality for providing plan explanations, there are certain limitations. We note that an optimal plan generated with respect to a complete environment/world model is not always explicable or self-explanatory. The space of optimal plans may be large, and the underlying preference or constraint that drives the generation of a particular plan may be difficult to pre-specify and incorporate within the planning model representation. We focus on explanations stemming directly from the realized plans themselves. 
Environment/world models (e.g. PDDL domain files) can be helpful in providing additional context, but are not necessary for our approach. Our work leverages LTL as an explanation language. Temporal patterns can offer greater expressivity and explanatory power in describing why a set of plans occurred and how they differ, and may reveal hidden plan dynamics that cannot be captured by the use of surface-level metrics like plan cost or action similarities. Our work on using LTL for contrastive explanations directly contributes to exploring how we can answer the following roadmap questions for XAIP BID9: "why did you do that? why didn't you do something else (that I would have done)?" Prior research into mining LTL specifications has focused on generating a "summary" explanation of the observed traces. BID12 explored mining globally persistent specifications from demonstrated action traces for a finite state Markov decision process. BID17 introduced Texada, a system for mining all possible instances of a given LTL template from an output log where each unique string is represented as a new proposition. BID28 proposed a template-based probabilistic model to infer task specifications given a set of demonstrations. However, all of these approaches focus on inferring a specification that all the demonstrated traces satisfy. For contrastive explanations, Neider and Gavran presented SAT-based algorithms to infer an LTL specification that delineates between the positive and negative sets of traces. Unlike existing LTL miners, the algorithms construct an arbitrary, minimal LTL specification without requiring predefined templates. However, they are designed to output only a single specification, and can fail when the sets contain imperfect traces (i.e., if there exists no specification consistent with every single input trace). We present a probabilistic model for the same problem and generate multiple contrastive explanations while offering robustness to noisy input. Some works have proposed algorithms to infer contrastive explanations for continuous valued time-series data based on a restricted signal temporal logic (STL) grammar BID33 BID15. However, the continuous space semantics of STL and a restricted subset of temporal operators make the grammar unsuitable for use with planning domain problems. To the best of our knowledge, our proposed model is the first probabilistic model to infer contrastive LTL specifications for sets of traces in domains defined by PDDL. Linear Temporal Logic (LTL) provides an expressive grammar for describing temporal behavior BID24. An LTL specification ϕ is constructed from a set of propositions V, the standard Boolean operators, and a set of temporal operators. Its truth value is determined with respect to a trace, π, which is an infinite or finite sequence of truth assignments for all propositions in V. The notation π, t |= ϕ indicates that ϕ holds at time t. The trace π satisfies ϕ (denoted by π |= ϕ) iff π, 0 |= ϕ. The minimal syntax for LTL can be described as follows: ϕ := p | ¬ϕ | ϕ 1 ∧ ϕ 2 | Xϕ | ϕ 1 Uϕ 2, where p is a proposition, and ϕ 1 and ϕ 2 are valid LTL specifications. Table 1 shows an example set of LTL templates, where n T corresponds to the number of free propositions for each template; for instance, one template holds when only one contiguous interval exists where p i is true, and another asserts that if p i occurs, p j occurred in the past.
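To make the semantics concrete, the following is a minimal sketch of checking an LTL formula against a finite trace, using the operators just introduced together with the derived operators F (eventually) and G (globally) described in the next paragraph. Finite-trace semantics and the nested-tuple formula encoding are simplifying assumptions made for this illustration.

def holds(phi, trace, t=0):
    # trace is a list of sets of true propositions; phi is a nested tuple,
    # e.g. ('U', ('prop', 'a'), ('prop', 'b')).
    op = phi[0]
    if op == 'prop':
        return t < len(trace) and phi[1] in trace[t]
    if op == 'not':
        return not holds(phi[1], trace, t)
    if op == 'and':
        return holds(phi[1], trace, t) and holds(phi[2], trace, t)
    if op == 'X':                                    # next
        return holds(phi[1], trace, t + 1)
    if op == 'U':                                    # until
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, i) for i in range(t, j))
                   for j in range(t, len(trace)))
    if op == 'F':                                    # eventually
        return any(holds(phi[1], trace, j) for j in range(t, len(trace)))
    if op == 'G':                                    # globally
        return all(holds(phi[1], trace, j) for j in range(t, len(trace)))
    raise ValueError(op)

# Example: FG(p) ("stability") on a trace where p eventually holds forever.
# holds(('F', ('G', ('prop', 'p'))), [set(), {'p'}, {'p'}])  ->  True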
U reads as "until" where ϕ 1 Uϕ 2 evaluates as true at time step t if ϕ 1 is true at that time and going forward, until a time step is reached where ϕ 2 becomes true. In addition to the minimal syntax, we also use higher-order temporal operators, F (eventually), G (global), and R (release). Fϕ holds true at t if ϕ holds for some time step ≥ t. Gϕ holds true at t if ϕ holds for all time steps ≥ t. ϕ 1 Rϕ 2 holds true at time step t if either there exists a time step t 1 ≥ t such that ϕ 2 holds true until t 1 where both ϕ 1 and ϕ 2 hold true simultaneously, or no such t 1 exists and ϕ 2 holds true for all time steps ≥ t. Interpretable sets of LTL templates have been defined and successfully integrated for a variety of software verification systems BID32 BID20. Some of the widely used templates are shown in Table 1. According to BID8, a contrastive explanation describes "why event A occurred as opposed to some alternative event B." In our problem, events A and B represent two sets of plan traces (can be seen as traces generated from different systems or different group behavior). The form of why may be expressed in various ways BID18 ); our choice is to define it according to the plans' satisfaction of a constraint. Then, formally: Definition 3.1. A contrastive explanation is a constraint ϕ that it is satisfied by one set of plan traces (positive set, π A), but not by the other (negative set, π B).The constraint ϕ can be seen as a classifier trying to separate the provided positive and negative traces. Its performance measure corresponds to standard classification accuracy, computed by counting the number of traces in π A that satisfy ϕ and, conversely, the number of traces in π B where ϕ is unsatisfied. Formally, accuracy of ϕ is: DISPLAYFORM0 Accuracy is 1 for a perfect contrastive explanation, and approaches zero if both sets contains no valid trace with respect to ϕ (i.e., all traces in π A dissatisfy ϕ and all traces in π B satisfy ϕ). The input to the problem is a pair of sets of traces (π A, π B). Each π i ∈ π is a trace on the set of propositions V (we refer to V as the vocabulary). The output is a set of specifications, {ϕ}, where each ϕ achieves perfect or near-perfect contrastive explanation. This is an unsupervised classification problem. We use LTL specifications for the choice of ϕ. Planning is sequential, and so temporal patterns can offer greater expressivity and explanatory power for identifying plan differences rather than static facts. We utilize a set of interpretable LTL templates, such as those shown in Table 1.A LTL template T is instantiated with a selection of n T propositions denoted by p ∈ V n T. The candidate formula ϕ is then composed as a conjunction of multiple instantiations of a template T based on a set of selections {p} ⊆ V n T. For example, an instantiation of T ="stability" with p = [apple] is written as FG(apple). If the selected subset of DISPLAYFORM0, asserting the stability condition for all three propositions. Conjunctions provide powerful semantics with the ability to capture a notion of quantification. Formally, our LTL specification is written as follows: DISPLAYFORM1 Note that the number of free propositions, n T, varies per LTL template. The number of possible specifications for a given LTL template T is 2 |V | n T. Instead of extracting specifications narrowed down to a single template query, our hypothesis space Φ is set to include a number of predefined templates, T 1, T 2,...T k. 
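To make the evaluation of candidate explanations concrete, the sketch below (our own illustrative Python, not code from the paper; the encoding of a trace as a list of sets of true propositions and all function names are assumptions) instantiates a template as a conjunction over chosen propositions, checks it against finite traces, and scores it with the contrastive accuracy defined above.

```python
from typing import Callable, List, Set

Trace = List[Set[str]]  # one set of true propositions per time step (finite trace)

def eventually(p: str) -> Callable[[Trace], bool]:
    return lambda tr: any(p in state for state in tr)          # F p

def globally(p: str) -> Callable[[Trace], bool]:
    return lambda tr: all(p in state for state in tr)          # G p

def stability(p: str) -> Callable[[Trace], bool]:
    # FG p over a finite trace (the "stability" template of Table 1):
    # p holds from some time step onward until the end of the trace.
    def check(tr: Trace) -> bool:
        return any(all(p in s for s in tr[t:]) for t in range(len(tr)))
    return check

def conjunction(template: Callable[[str], Callable[[Trace], bool]],
                props: List[str]) -> Callable[[Trace], bool]:
    # phi = conjunction of one template instantiation per selected proposition
    checks = [template(p) for p in props]
    return lambda tr: all(c(tr) for c in checks)

def contrastive_accuracy(phi, pos: List[Trace], neg: List[Trace]) -> float:
    # traces in the positive set should satisfy phi; traces in the negative set should not
    hits = sum(phi(tr) for tr in pos) + sum(not phi(tr) for tr in neg)
    return hits / (len(pos) + len(neg))

# Toy usage: "eventually a AND eventually b" perfectly separates the two sets.
pos = [[{"a"}, {"b"}], [{"a", "b"}, set()]]
neg = [[{"a"}, set()], [set(), {"b"}]]
phi = conjunction(eventually, ["a", "b"])
print(contrastive_accuracy(phi, pos, neg))   # 1.0
```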
With k representing the number of possible templates, the full hypothesis space of Φ grows with O(k · 2 |V | n T). Employing brute force enumeration to find {ϕ} that achieves the contrastive explanation criterion rapidly becomes intractable with increasing vocabulary size. We model specification learning as a Bayesian inference problem, building on the fundamental Bayes theorem: DISPLAYFORM0 Our goal is to infer ϕ * = argmax Φ P (ϕ|X). P (ϕ) represents the prior distribution over the hypothesis space, and Figure 1: A graphical model of the generative distribution. ϕ represents the latent LTL specification that we seek to infer given the evidence X (in our case, the traces).P (X | ϕ) is the likelihood of observing the evidence X = (π A, π B) given ϕ. We adopt a probabilistic generative modeling approach that has been used extensively in topic modeling BID2. Below, we describe each component of our generative model, depicted in Figure 1.Prior Function ϕ is generated by choosing a LTL template, T, the number of conjunctions, N, and then the proposition instantiations, p for each conjunct. The generative process for each of those components is as follows: DISPLAYFORM1 T is generated with respect to a categorical distribution with weights w T ∈ R k over the k possible LTL templates. w T is a hyperparameter that the designer can set to assert preferences for mining certain types of templates over others (e.g., preferring templates with "global" operators than "until" operators).The number of conjunctions, N = |{p}|, is generated using a geometric distribution with a decay rate of λ. Thus, the the probability of ϕ is reduced by λ for each addition of a conjunct, incentivizing low-complexity specifications defined in terms of having a fewer number of conjunctions (which also implies fewer total propositions). This promotes conciseness and prevents over-fitting to the input traces (i.e., to avoid restating the input as a long, convoluted LTL formula).Similar to the method used for template selection, we use a separate categorical distribution for selecting propositions p for each conjunct in ϕ. Propositions are generated with respect to the probability weights, w p ∈ R |V |, defined for all p in V. The designer can likewise control w p to favor specifications instantiated with certain types of propositions over others. w p may be interpreted as the level of saliency of propositions for an application. (For example, propositions that are landmarks for planning problems BID11, or a part of the causal links set BID31, may be deemed more important to express in plan explanations than other auxiliary state variables.) Several forms of variable importance, corresponding to the saliency of that importance in an explanation, may be applied to set w p. This opens the door to hypothesizing which propositions are most salient for a given domain, and generating explanations restricted to those propositions exclusively. The full prior function, P (ϕ), is evaluated as follows: DISPLAYFORM2 The derivation follows from the definition that T, N, {p} completely describe ϕ (i.e. P (ϕ | T, N, {p}) = 1), and the assumption that the three probability distributions are independent of each other. P (T) and P (N) are calculated using categorical and geometric distributions outlined in Equations 5 and 6, respectively. P ({p}) denotes the probability of the full set of proposition instantiations (over all conjuncts); it is calculated by the average categorical weight, w p, over all propositions. 
Formally: Likelihood Function The likelihood function P (X | ϕ) is the probability of observing the input sets of traces in the satisfying set π A and the non-satisfying set π B given the contrastive specification. The traces in π A and π B are generated by different solutions to the planning problem that satisfy the problem specification. As the problem specification is the only input needed to generate a set of plans, we assume that the individual traces are conditionally independent of each other, given the planning problem specification. With the conditional independence assumption, the likelihood can then be factored as follows: DISPLAYFORM3 DISPLAYFORM4 LTL satisfaction checks are conducted over all traces belonging to sets π A and π B; P (π i |ϕ) is set equal to 1 − α if π i |= ϕ, and α otherwise. Conversely, P (π j |ϕ) is set equal to 1−β if π j ϕ, and β otherwise. α and β permit non-zero probability to traces not adhering to the constrastive explanation criterion, thereby providing robustness to noisy traces and outliers. α and β may be set to different values to reflect the relative importance of the positive and negative sets (e.g., may be used to counteract imbalanced sets).In order to perform LTL satisfaction checks on a trace, we follow the method developed by BID17, in which ϕ is represented as a tree and each temporal operator is recursively evaluated according to its semantics. Since sub-trees of two different ϕ may be identical, we memoize and re-use evaluation to significantly speed up LTL satisfaction checks. Proposal Function Exact inference methods to find maximum a posterior (MAP) estimates, {ϕ *}, are intractable. Thus we implement a Markov Chain Monte Carlo method, specifically the Metropolis-Hasting (MH) algorithm BID6, to iteratively draw samples whose collection approximates the true posterior distribution. MH sampling requires a user-defined proposal function F (ϕ |ϕ) that samples a new candidate ϕ given the current ϕ. Our F behaves similar to an -greedy search, utilizing a drift kernel (i.e. a random walk) with a probability of 1-or sampling from the prior distribution (i.e. a restart) with a probability of. The drift kernel operates by performing one of the following moves on the current candidate LTL ϕ:• Remain within the current template T, add a new conjunct, and instantiate that conjunct with a randomly sampled p that is currently not in ϕ. The probability associated with this move, Q add, is equal to 1/(|V n T | − N).• Remain within the current template T and randomly remove one of the existing conjuncts. The probability associated with this move, Q remove, is equal to 1/N. The selection between these two moves is conducted uniformly, though there is no issue with allowing the designer to weight one more likely than the other. Note that the drift kernel perturbs ϕ, but stays within the current template. ϕ transitions to a new template (probabilistically) when choosing to sample from the prior. The probability distribution associated with F, denoted by Q(ϕ |ϕ), is then outlined as follows: DISPLAYFORM5, sample prior function Our proposal function F fulfills the ergodicity condition of the Markov process (the transition from any ϕ to ϕ is aperiodic and occurs within a finite number of steps), thus asymptotically guarantees the sampling process from the true posterior distribution. 
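Pulling these pieces together, here is a minimal sketch of the log-prior (categorical template choice, geometric decay over the number of conjuncts, average categorical weight over the instantiated propositions) and the alpha/beta-slack likelihood that enter the acceptance ratio given next; the exact parameterization of the geometric term, the data structures, and the function names are our own assumptions rather than the paper's implementation.

```python
import math

def log_prior(template_idx, props, w_T, w_p, decay=0.7):
    # P(T): categorical over templates (w_T assumed normalized);
    # P(N): geometric-style decay, reducing probability by `decay` per extra conjunct
    #       (one common parameterization of a geometric distribution);
    # P({p}): average categorical weight w_p over the instantiated propositions.
    log_pT = math.log(w_T[template_idx])
    n = len(props)
    log_pN = (n - 1) * math.log(decay) + math.log(1.0 - decay)
    avg_wp = sum(w_p[p] for p in props) / n
    return log_pT + log_pN + math.log(avg_wp)

def log_likelihood(phi, pos_traces, neg_traces, alpha=0.01, beta=0.01):
    # Traces violating the contrastive criterion keep a small non-zero probability,
    # which is what gives the model its robustness to noisy or mislabeled traces.
    ll = 0.0
    for tr in pos_traces:
        ll += math.log(1.0 - alpha) if phi(tr) else math.log(alpha)
    for tr in neg_traces:
        ll += math.log(1.0 - beta) if not phi(tr) else math.log(beta)
    return ll
```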
A new sample ϕ is accepted at every MH iteration with the following probability: DISPLAYFORM6 The set of accepted samples approximates the true posterior, and the MAP estimates (the output {ϕ *}) are determined from the relative frequencies of accepted samples. We evaluated the effectiveness of our model for inferring contrastive explanations from sets of traces generated from a number of International Planning Competition (IPC) planning domains BID19. The plan traces in π A were generated by first injecting the ground truth ϕ ground into the original PDDL domain and problem files, enforcing valid plans on the modified domain/problem files to satisfy ϕ ground. The LTL injection to create modified planning files was performed using the LTLFOND2FOND tool BID4. Second, a state-of-the-art top-k planner 1 BID13 ) was used to produce a set of distinct, valid plans and their accompanying state execution traces. Similarly, the above steps were repeated to generate execution traces for π B, wherein the negation of the ground truth specification, ¬ϕ ground, was injected to the planning files, and then a set of traces was collected. Such a setup guarantees the existence of contrastive explanation solutions on (π A, π B), which includes (but is not limited to) ϕ ground. We collected twenty traces for each set. We evaluated our model using six different IPC benchmark domains, containing problems related to mission planning, vehicle routing, and resource allocation. For each of these domains, we tested three different problem instances of increasing vocabulary size, and on twenty randomly generated ϕ ground specifications for each problem instance. For each test case, ϕ ground was randomly generated using one of the seven LTL templates listed in Table 1; thus the hypothesis space Φ was set to include all possible specifications over the predefined templates. The categorical distribution weights, w T and w p, were set to be uniform. Other hyperparameters were set as follows: α = β = 0.01, to put equal importance of positive and negative sets, λ = 0.7 to penalize ϕ for every additional conjunct, and = 0.2 to apply -greedy search in the the proposal function. We ran the MH sampler with num M H = 2, 000 iterations with the first 300 used as a burn-in period. These hyperparameters were set apriori, similar to how a wide range of probablistic graphical models are designed. However, our experimental were found to be robust to the various settings of these parameters. We evaluated our model against the SAT-based miner developed by BID22, the state-of-the-art for extracting contrastive LTL specifications. We also evaluated our model against brute force enumeration, a common approach employed by existing LTL miners used for summarization BID32 BID17. Because enumerating through full space of Φ would in a time out, we tested delimited enumeration with only a random subset of brute force samples. This baseline selects a random subset of size num brute from Φ. Then, a function proportional to the posterior distribution (numerator in Equation 4) is evaluated for each of the samples to determine {ϕ *}. num brute was set equal to num M H to enable a fair baseline in terms of having the same amount of allotted computation. TAB2 shows the inference on the tested domains and on problem instances of varying complexity (reflected by an increase in |V |). For evaluation, we measured M = |{ϕ *}|, the number of unique contrastive explanations extracted by the different approaches, along with the explanations' accuracy. 
Each domain-problem combination row shows the average statistics over twenty ϕ ground test cases. High M and high accuracy across all domain-problem combinations demonstrate how our probabilistic model was able to generate multiple, near-perfect contrastive explanations. The solution set {ϕ *} almost always included ϕ ground. Our model outperformed the baseline and the stateof-the-art miner by producing more contrastive explanations within an allotted amount of computation / runtime. The runtime for our model and the delimited enumeration baseline with 2,000 samples ranged between 1.2-4.7 seconds (increase in |V | only had marginal effect on the runtime). The SAT-based miner by Neider and Gavran often failed to generate a solution within a five minute cutoff (see the number of its timeout cases in the last column of TAB2). The prior work can only output a single ϕ *, which frequently took on a form of Fp i. It did not scale well to problems that required more complex ϕ as solutions. This is because increasing the "depth" of ϕ (the number of temporal / Boolean operators and propositions) exponentially increased the size of the compiled SAT problem. In our experiments, the prior work timed out for problems requiring solutions with depth ≥ 3 (note that Fp i has depth of 2). Robustness to Noisy Input In order to test robustness, we perturbed the input X by randomly swapping traces between π A and π B. For example, a noise rate of 0.2 would swap 20% of the traces, where the accuracy of ϕ ground on the perturbed data, X = (π A, π B), would evaluate to 0.8 (note that it may be possible to discover other ϕ that achieve better accuracy on X). The MAP estimates inferred from X, {ϕ *}, were evaluated on the original input X to assess any loss of ability to provide contrast. Figure 3 shows the average accuracy of {ϕ *}, evaluated on both X and X, across varying noise rate. Even at a moderate noise rate of 0.25, the inferred ϕ * s were able to maintain an average accuracy greater than 0.9 on X. Such a threshold is promising for real-world applications. The robustness did start to sharply decline as noise rate increased past 0.4. For all test cases, the Neider and Gavran miner failed to generate a solution for anything with a noise rate ≥ 0.1. Large values of M signify how there are often various ways to express how plan traces differ using the LTL semantics. Some LTL specifications are logically dependent. For example, the global template subsumes both the stability and the eventuality template. LTL specifications may also be related through substitutions of propositions. For example, on Figure 3: The accuracy of ϕ * with respect to increasing noise rate. ϕ * is inferred from the perturbed, noisy data and then is evaluated (generalized) on the original input X. Each domain subplot shows the averages across all three problem instances and all twenty ϕ ground test cases. 95% confidence intervals are displayed.problems where holding a block is a prerequisite to placing it onto a table, ϕ 1 = F(holding A) ∧ F(holding B) will be satisfied in concert with the satisfaction of ϕ 2 = F(ontable A)∧F(ontable B). For contrastive explanation, however, one needs to be mindful of both positive and negative sides of satisfaction which affect the accuracy. Relations like template subsumptions or precondition / effect pairs should not be simply favored during search without understanding that the converse may not hold and may in worse accuracy. 
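For reference, the input perturbation used in the robustness experiment above, which swaps a fraction of traces between the positive and negative sets, can be sketched as follows (the function name and the exact counting convention are our own assumptions).

```python
import random

def swap_traces(pos, neg, noise_rate, seed=0):
    # Swap roughly `noise_rate` of the traces between the two sets
    # (indices may repeat in this simple sketch; one reasonable reading of "swap x%").
    rng = random.Random(seed)
    pos, neg = list(pos), list(neg)
    k = int(noise_rate * min(len(pos), len(neg)))
    for _ in range(k):
        i, j = rng.randrange(len(pos)), rng.randrange(len(neg))
        pos[i], neg[j] = neg[j], pos[i]
    return pos, neg
```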
For a contrastive ϕ, it is possible to create a new contrastive ϕ that includes stationary propositions or tautologies specific to the planning problem. For example, if ϕ 1 = F(holding A) ∧ F(holding B) is a contrastive explanation, so is ϕ 3 = F(holding A) ∧ F(holding B) ∧ F(earth is round). Our posterior distribution assigns a lower probability to ϕ 3 than ϕ 1 based on the decay rate on the number of conjunctions. Also, tautologies by themselves cannot be contrastive explanations, because they can never be dissatisfied. The output of our model appropriately excluded such vacuous explanations. TAB2 shows how M generally increased as |V | increased. This opens up interesting research avenues for determining a minimal set of {ϕ}. Assessing logical dependence or metric space between two arbitrary LTL specifications, however, is non-trivial. Evaluation on Real-world Inspired Domain We applied our inference model on a large force exercise (LFE) domain, which simulate air-combat games used to train pilots. Through the use of Joint Semi-Automated Forces environment BID0, realistic aircraft behavior and their state execution traces were collected for the mission objective of "gain and maintain air superiority." A total of 24 instances (i.e. traces) of LFEs were separated into positive and negative sets by a subject matter expert. The detail of the input was as follows: |π A |=16, |π B |=8, |V |=15, and the average length of traces involved 11 time steps. Within a second (2,000 samples), our model generated ten unique contrastive explanations, all with accuracy of 0.96. ϕ * 1 = G(attrition < 0.25) ∧ G(striker not shot) represents how friendly attrition rate should be always less than 25% and that the striker aircraft should never be shot upon. ϕ * 2 = (attrition < 0.25) U (weapon release) asserts how friendly attrition rate has to be less than 25% before releasing the weapon. The model also inferred rules of the environment, for example, asserting that propositions (attrition < 0.75) and (attrition < 0.50) precede (attrition < 0.25) (which makes sense because attrition can only increase throughout the mission). After discussion with the expert, we discovered that the model could not generate the perfect contrastive ϕ ground, because it required having multiple conjuncts that incorporate different LTL templates (which is not part of our defined solution space). Nevertheless, the generated explanations were consistent with the expert's interpretation of achieving the mission objective of air superiority. We have presented a probabilistic Bayesian model to infer contrastive LTL specifications describing how two sets of plan traces differ. Our model generates multiple contrastive explanations more efficiently than the state-of-the-art and demonstrates robustness to noisy input. It also provides a principled approach to incorporate various forms of prior knowledge or preferences during search. It can serve as a strong foundation that can be naturally extended to multiple input sets by repeating the algorithm for all pairwise or one-vs.-rest comparisons. Interesting avenues for future work include gauging the saliency of propositions, as well as deriving a minimal set of contrastive explanations. Furthermore, we seek to test the model in human-in-the-loop settings, with the goal of understanding the relationship between different planning heuristics for the saliency of propositions (e.g. landmarks and causal links) to their actual explicability when the explanation is communicated to a human. 
| We present a Bayesian inference model to infer contrastive explanations (as LTL specifications) describing how two sets of plan traces differ. | 651 | scitldr |
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine). Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of the zonotopes are precise functions of the neural network parameters. We utilize this geometric characterization to shed light on and provide a new perspective for three tasks. In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries. Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network. We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries geometry to change the network prediction of the input). In addition, and in an attempt to understand some of the subtle behaviours DNNs exhibit, e.g. the sensitive reaction of DNNs to small input perturbations, several works directly investigated the decision boundaries induced by a DNN used for classification. One line of work showed that the smoothness of these decision boundaries and their curvature can play a vital role in network robustness. Moreover, He et al. (2018a) studied the expressiveness of these decision boundaries at perturbed inputs and showed that these boundaries do not resemble the boundaries around benign inputs. It was also shown that under certain assumptions, the decision boundaries of the last fully connected layer of DNNs will converge to a linear SVM, and that the decision regions of DNNs with width smaller than the input dimension are unbounded. More recently, and due to the popularity of the piecewise linear ReLU as an activation function, there has been a surge in the number of works that study this class of DNNs in particular. The tropical semiring T = R ∪ {−∞} is equipped with the two operations x ⊕ y = max{x, y} and x ⊙ y = x + y, ∀x, y ∈ T. It can be readily shown that −∞ is the additive identity and 0 is the multiplicative identity. Given the previous definition, a tropical power can be formulated as x^⊙a = x ⊙ x ⊙ · · · ⊙ x = a · x, for x ∈ T, a ∈ N, where a · x is standard multiplication. Moreover, the tropical quotient can be defined as x ⊘ y = x − y, where x − y is the standard subtraction. For ease of notation, we write x^⊙a as x^a. Now, we are in a position to define tropical polynomials, their solution sets and tropical rationals. A tropical polynomial in x ∈ T^n takes the form f(x) = (c_1 ⊙ x^a_1) ⊕ (c_2 ⊙ x^a_2) ⊕ · · · ⊕ (c_n ⊙ x^a_n), where a_i ≠ a_j whenever i ≠ j, and where we use the more compact vector notation x^a = x_1^a_1 ⊙ x_2^a_2 ⊙ · · · ⊙ x_n^a_n. In this section, we analyze the decision boundaries of a network in the form (Affine, ReLU, Affine) using tropical geometry. For ease, we use ReLUs as the non-linear activation, but any other piecewise linear function can also be used. The functional form of this network is f(x) = B max(Ax + c_1, 0) + c_2, where max is an element-wise operator. The outputs of the network f are the logit scores.
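Before proceeding, a small numerical illustration of the tropical operations and polynomials defined above (our own sketch, using NumPy):

```python
import numpy as np

NEG_INF = -np.inf  # additive identity of the tropical (max-plus) semiring

def trop_add(x, y):      # x ⊕ y = max(x, y)
    return np.maximum(x, y)

def trop_mul(x, y):      # x ⊙ y = x + y
    return x + y

def trop_poly(x, coeffs, exponents):
    # f(x) = ⊕_i ( c_i ⊙ x^{a_i} ) = max_i ( c_i + <a_i, x> )  for x ∈ R^n
    monomials = [c + float(np.dot(a, x)) for c, a in zip(coeffs, exponents)]
    return max(monomials)

print(trop_add(3.0, 5.0), trop_mul(3.0, 5.0))   # 5.0  8.0
print(trop_add(NEG_INF, 7.0))                   # 7.0 : -inf acts as the additive identity

x = np.array([1.0, -2.0])
print(trop_poly(x, coeffs=[0.0, 1.0], exponents=[np.array([1, 0]), np.array([0, 1])]))
# max(0 + x1, 1 + x2) = max(1.0, -1.0) = 1.0
```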
Throughout this section, we assume 2 that A ∈ Z p×n, B ∈ Z 2×p, c 1 ∈ R p and c 2 ∈ R 2. For ease of notation, we only consider networks with two outputs, i.e. B 2×p, where the extension to a multi-class output follows naturally and it is discussed in the appendix. Now, since f is a piecewise linear function, each output can be expressed as a tropical rational as per Theorem 1. If f 1 and f 2 refer to the first and second outputs respectively, we have f 1 (x) = H 1 (x) Q 1 (x) and f 2 (x) = H 2 (x) Q 2 (x), where H 1, H 2, Q 1 and Q 2 are tropical polynomials. In what follows and for ease of presentation, we present our main where the network f has no biases, i.e. c 1 = 0 and c 2 = 0, and we leave the generalization to the appendix. Theorem 2. For a bias-free neural network in the form of f (x): R n → R 2 where A ∈ Z p×n and B ∈ Z 2×p, let R(x) = H 1 (x) Q 2 (x) ⊕ H 2 (x) Q 1 (x) be a tropical polynomial. Then: • Let B = {x ∈ R n : f 1 (x) = f 2 (x)} defines the decision boundaries of f, then B ⊆ T (R(x)). • δ (R(x)) = ConvHull (Z G1, Z G2). Z G1 is a zonotope in R n with line segments The proof for Theorem 2 is left for the appendix. Digesting Theorem 2. Theorem 2 can be broken into two major . The first, which is on the algebra side, i.e. finding the solution set to tropical polynomials, states that the decision boundaries linear pieces separating classes C1 and C2. As per Theorem 2, the dual subdivision of this single hidden neural network is the convex hull between the zonotopes Z G 1 and Z G 2. The normals to the dual subdivison δ(R(x)) are in one-to-one correspondence to the tropical hypersurface T (R(x)), which is a superset to the decision boundaries B. Note that some of the normals to δ(R(x)) (in red) are parallel to the decision boundaries. B is a subset of the tropical hypersurface of the tropical polynomial R(x), i.e. T (R(x)). The second , which is on the geometry side, of Theorem 2 relates the tropical polynomial R(x) to the geometric representation of the solution set to R(x), i.e. T (R(x)), referred to as the dual subdivision, i.e. δ(R(x)). In particular, Theorem 2 states that the dual subdivision for a network f is the convex hull of two zonotopes denoted as Z G1 and Z G2. Note that this dual subdivision is a function of only the network parameters A and B. Theorem 2 bridges the gap between the behaviour of the decision boundaries B, through the super-set T (R(x)), and the polytope δ (R(x)), which is the convex hull of two zonotopes. It is worthwhile to mention that discussed a special case of the first part of Theorem 2 for a neural network with a single output and a score function s(x) to classify the output. To the best of our knowledge, this work is the first to propose a tropical geometric formulation of a super-set containing the decision boundaries of a multi-class classification neural network. In particular, the first of Theorem 2 states that one can alter the network, e.g. by pruning network parameters, while preserving the decision boundaries B, if one preserves the tropical hypersurface of R(x) or T (R(x)). While preserving the tropical hypersurfaces can be equally difficult to preserving the decision boundaries directly, the second of Theorem 2 comes in handy. For a bias free network, π becomes an identity mapping with δ(R(x)) = ∆(R(x)), and thus the dual subdivision δ(R(x)), which is the Newton polytope ∆(R(x)) in this case, becomes a well structured geometric object that can be exploited to preserve decision boundaries. 
(Proposition 3.1.6) showed that the tropical hypersurface is the skeleton of the dual to δ(R(x)), the normal lines to the edges of the polytope δ(R(x)) are in one-to-one correspondence with the tropical hypersurface T (R(x)). Figure 1 details this intimate relation between the decision boundaries, tropical hypersurface T (R(x)), and normals to δ (R(x)). Before any further discussion, we recap the definition of zonotopes. Equivalently, the zonotope can be expressed with respect to the generator matrix U ∈ R p×n, where Another common definition for zonotopes is the Minkowski sum (refer to appendix A for the definition of the Minkowski sum) of a set of line segments that start from the origin with end points u 1,..., u p ∈ R n. It is also well known that the number of vertices of a zonotope is polynomial in the number of line segments. That is to say, While Theorem 2 presents a strong relation between a polytope (convex hull of two zonotopes) and the decision boundaries, it remains unclear how such a polytope can be efficiently constructed. Although the number of vertices of a zonotope is polynomial in the number of its generating line segments, fast algorithms for enumerating these vertices are still restricted to zonotopes with line segments starting at the origin . Since the line segments generating the zonotopes in Theorem 2 have arbitrary end points, we present the next that transforms these line segments into a generator matrix of line segments starting from the origin, as prescribed in Definition 6. This is essential for the efficient computation of the zonotopes in Theorem 2. Proposition 1. Consider p line segments in R n with two arbitrary end points as follows. The zonotope formed by these line segments is equivalent to the zonotope formed by the line segments {[u training dataset, decision boundaries polytope of original network followed by the decision boundaries polytope during several iterations of pruning with different initializations. The proof is left for the appendix. As per Proposition 1, the generator matrices of zonotopes In what follows, we show several applications for Theorem 2. We begin by leveraging the geometric structure to help in reaffirming the behaviour of the lottery ticket hypothesis. The lottery ticket hypothesis was recently proposed by , in which the authors surmise the existence of sparse trainable sub-networks of dense, randomly-initialized, feedforward networks that-when trained in isolation-perform as well as the original network in a similar number of iterations. To find such sub-networks, propose the following simple algorithm: perform standard network pruning, initialize the pruned network with the same initialization that was used in the original training setting, and train with the same number of epochs. They hypothesize that this should in a smaller network with a similar accuracy to the larger dense network. In other words, a subnetwork can have similar decision boundaries to the original network. While in this section we do not provide a theoretical reason for why this proposed pruning algorithm performs favorably, we utilize the geometric structure that arises from Theorem 2 to reaffirm such behaviour. In particular, we show that the orientation of the decision boundaries polytope δ(R(x)), known to be a superset to the decision boundaries T (R(x)), is preserved after pruning with the proposed initialization algorithm of. 
On the other hand, pruning routines with a different initialization at each pruning iteration will in a severe variation in the orientation of the decision boundaries polytope. This leads to a large change in the orientation of the decision boundaries, which tends to hinder accuracy. To this end, we train a neural network with 2 inputs (n = 2), 2 outputs, and a single hidden layer with 40 nodes (p = 40). We then prune the network by removing the smallest x% of the weights. The pruned network is then trained using different initializations: (i) the same initialization as the original network , (ii) Xavier , (iii) standard Gaussian and (iv) zero mean Gaussian with variance of 0.1. Figure 2 shows the evolution of the decision boundaries polytope, i.e. δ(R(x)), as we perform more pruning (increasing the x%) with different initializations. It is to be observed that the orientation of the polytopes δ(R(x)) vary much more for all different initialization schemes as compared to the lottery ticket initialization. This gives an indication that lottery ticket initialization indeed preserves the decision boundaries throughout the evolution of pruning. Another approach to investigate the lottery ticket could be by observing the polytopes representing the functional form of the network directly, i.e. δ(H {1,2} (x)) and δ(Q {1,2} (x)), in lieu of the decision boundaries polytopes. However, this does not provide conclusive answers to the lottery ticket, since there can exist multiple functional forms, and correspondingly multiple polytopes δ(H {1,2} (x)) and δ(Q {1,2} (x)), for networks with the same decision boundaries. This is why we explicitly focus our analysis on δ(R(x)), which is directly related to the decision boundaries of the network. Further discussions and experiments are left for the appendix. Network pruning has been identified as an effective approach for reducing the computational cost and memory usage during network inference time. While pruning dates back to the work of and , it has recently gained more attention. This is due to the fact that most neural networks over-parameterize commonly used datasets. In network pruning, the task is to find a smaller subset of the network parameters, such that the ing smaller network has similar decision boundaries (and thus supposedly similar accuracy) to the original over-parameterized network. In this section, we show a new geometric approach towards network pruning. In particu- th node, or equivalently removing the two yellow vertices of zonotope ZG 2 does not affect the decision boundaries polytope which will not lead to any change in accuracy. lar, as indicated by Theorem 2, preserving the polytope δ(R(x)) preserves a superset to the decision boundaries T (R(x)), and thus supposedly the decision boundaries themselves. Motivational Insight. For a single hidden layer neural network, the dual subdivision to the decision boundaries is the polytope that is the convex hull of two zonotopes, where each is formed by taking the Minkowski sum of line segments (Theorem 2). Figure 3 shows an example where pruning a neuron in the neural network has no effect on the dual subdivision polytope and equivalently no effect on the accuracy, since the decision boundaries of both networks remain the same. Problem Formulation. 
Given the motivational insight, a natural question arises: Given an overparameterized binary neural network f (x) = B max (Ax, 0), can one construct a new neural network, parameterized by some sparser weight matricesà andB, such that this smaller network has a dual subdivision δ(R(x)) that preserves the decision boundaries of the original network? In order to address this question, we propose the following general optimization problem The function d defines a distance between two geometric objects. Since the generatorsG 1 and G 2 are functions ofà andB (as per Theorem 2), this optimization problem can be challenging to solve. However, for pruning purposes, one can observe from Theorem 2 that if the generatorsG 1 andG 2 had fewer number of line segments (rows), this corresponds to a fewer number of rows in the weight matrixà (sparser weights). To this end, we observe that ifG 1 ≈ G 1 andG 2 ≈ G 2, thenδ(R(x)) ≈ δ(R(x)), and thus the decision boundaries tend to be preserved as a consequence. Therefore, we propose the following optimization problem as a surrogate to Problem The matrix mixed norm for C ∈ R n×k is defined as C 2,1 = n i=1 C(i, :) 2, which encourages the matrix C to be row sparse, i.e. complete rows of C are zero. Note thatG 1 = Diag[ReLU(B(1, :))+ReLU(−B(2, :))] Ã,G 2 = Diag[ReLU(B(2, :))+ReLU(−B(1, :))]Ã, and Diag(v) rearranges the elements of vector v in a diagonal matrix. We solve the aforementioned problem with alternating optimization over the variablesà andB, where each sub-problem is solved in closed form. Details of the optimization and the extension to multi-class case are left for the appendix. Extension to Deeper Networks. For deeper networks, one can still apply the aforementioned optimization for consecutive blocks. In particular, we prune each consecutive block of the form (Affine,ReLU,Affine) starting from the input and ending at the output of the network. Experiments on Tropical Pruning. Here, we evaluate the performance of the proposed pruning approach as compared to several classical approaches on several architectures and datasets. In particular, we compare our tropical pruning approach against Class Blind (CB), Class Uniform (CU) and Class Distribution (CD);. In Class Blind, all the parameters across all nodes of a layer are sorted by magnitude where x% with smallest magnitudes are pruned. Similar to Class Blind, Class Uniform prunes the parameters with smallest x% magnitudes per node in a layer as opposed to sorting all parameters in all nodes as in Class Blind. Lastly, Class Distribution performs pruning of all parameters for each node in the layer, just as in Class Uniform, but the parameters are pruned based on the standard deviation σ c of the magnitude of the parameters per node. Since fully connected layers in deep neural networks tend to have much higher memory complexity than convolutional layers, we restrict our focus to pruning fully connected layers. We train AlexNet and VGG16 on SVHN, CIFAR10, and CIFAR 100 datasets. We observe that we can prune more than 90% of the classifier parameters for both networks without affecting the accuracy. Moreover, we can boost the pruning ratio using our method without affecting the accuracy by simply retraining the network biases only. Setup. We adapt the architectures of AlexNet and VGG16, since they were originally trained on ImageNet , to account for the discrepancy in the input resolution. 
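Before the experimental details continue below, the building block of the closed-form row update used by the alternating scheme above can be sketched as follows; the single-term objective, constants, and names are our own simplification (the actual update couples both zonotope generator targets), shown only to illustrate how row-sparsity, and hence neuron pruning, emerges from the ℓ2,1 penalty.

```python
import numpy as np

def prox_l2(v, tau):
    # Proximal operator of tau * ||.||_2 (block soft-thresholding): small rows collapse to zero.
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= tau else (1.0 - tau / nrm) * v

def update_A_rows(G_target, c, lam):
    # Illustrative per-row closed form for  min_a ||G_i - c_i a||^2 + lam * ||c_i a||_2 :
    #   a_i = prox_{lam / (2 c_i)}(G_i / c_i)   whenever c_i > 0  (row is left at zero otherwise).
    A = np.zeros_like(G_target)
    for i, ci in enumerate(c):
        if ci > 1e-12:
            A[i] = prox_l2(G_target[i] / ci, lam / (2.0 * ci))
    return A

G = np.array([[3.0, 0.0], [0.1, 0.1], [1.0, 2.0]])
print(update_A_rows(G, c=np.array([1.0, 1.0, 1.0]), lam=1.0))
# The middle row (a weak generator) is zeroed out, i.e. that neuron is pruned.
```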
The fully connected layers of AlexNet and VGG16 have sizes of and, respectively on SVHN and CIFAR100 with the last layer replaced to 100 for CIFAR100. All networks were trained to baseline test accuracy of (92%,74%,43%) for AlexNet on SVHN, CIFAR10 and CIFAR100, respectively and (92%,92%,70%) for VGG16. To evaluate the performance of pruning, following previous works , we report the area under the curve (AUC) of the pruning-accuracy plot. The higher the AUC is, the better the trade-off is between pruning rate and accuracy. For efficiency purposes, we run the optimization in Problem for a single alternating iteration to identify the rows inà and elements ofB that will be pruned, since an exact pruning solution might not be necessary. The algorithm and the parameters setup to solving is left for the appendix. Results. Figure 4 shows the pruning comparison between our tropical approach and the three aforementioned popular pruning schemes on both AlexNet and VGG16 over the different datasets. Our proposed approach can indeed prune out as much as 90% of the parameters of the classifier without sacrificing much of the accuracy. For AlexNet, we achieve much better performance in pruning as compared to other methods. In particular, we are better in AUC by 3%, 3%, and 2% over other pruning methods on SVHN, CIFAR10 and CIFAR100, respectively. This indicates that the decision boundaries can indeed be preserved by preserving the dual subdivision polytope. For VGG16, we perform similarly well on both SVHN and CIFAR10 and slightly worse on CIFAR100. While the performance achieved here is comparable to the other pruning schemes, if not better, we emphasize that our contribution does not lie in outperforming state-of-the-art pruning methods, but rather in giving a new geometry based perspective to network pruning. We conduct more experiments, where only the biases of the network or the biases of the classifier are fine tuned after pruning. Retraining biases can be sufficient as they do not contribute to the orientation of the decision boundaries polytope, thereafter the decision boundaries, but only a translation. Discussion on biases and more are left for the appendix. DNNs are notoriously known to be susceptible to adversarial attacks. In fact, adding small imperceptible noise, referred to as adversarial attacks, at the input of these networks can hinder their performance. Several works investigated the decision boundaries of neural networks in the presence of adversarial attacks. For instance, analyzed high dimensional geometry of adversarial examples by the means of manifold reconstruction. Also, He et al. (2018b) crafted adversarial attacks by estimating the distance to the decision boundaries using random search directions. In this work, we provide a tropical geometric view to this problem. where we show how Theorem 2 can be leveraged to construct a tropical geometric based targeted adversarial attack. Dual View to Adversarial Attacks. For a classifier f: R n → R k and input x 0 that is classified as c, a standard formulation for targeted adversarial attacks flips the classifier prediction to a particular class t and it is usually defined as follows This objective aims at computing the lowest energy input noise η (measured by D) such that the the new sample (x 0 + η) crosses the decision boundaries of f to a new classification region. Here, we present a dual view to adversarial attacks. 
Instead of designing a sample noise η such that (x 0 + η) belongs to a new decision region, one can instead fix x 0 and perturb the network parameters to move the decision boundaries in a way that x 0 appears in a new classification region. In particular, let A 1 be the first linear layer of f, such that f (x 0) = g(A 1 x 0). One can now perturb A 1 to alter the decision boundaries and relate the perturbation to the input perturbation as follows From this dual view, we observe that traditional adversarial attacks are intimately related to perturbing the parameters of the first linear layer through the linear system: To this end, Theorem 2 provides explicit means to geometrically construct adversarial attacks by means of perturbing decision boundaries. In particular, since the normals to the dual subdivision polytope δ(R(x)) of a given neural network represent the tropical hypersurface set T (R(x)) which is, as per Theorem 2, a superset to the decision boundaries set B, ξ A1 can be designed to in a minimal perturbation to the dual subdivision that is sufficient to change the network prediction of x 0 to the targeted class t. Based on this observation, we formulate the problem as follows The loss is the standard cross-entropy loss. The first row of constraints ensures that the network prediction is the desired target class t when the input x 0 is perturbed by η, and equivalently by perturbing the first linear layer A 1 by ξ A1. This is identical to f 1 as proposed by. Moreover, the third and fourth constraints guarantee that the perturbed input is feasible and that the perturbation is bounded, respectively. The fifth constraint is to limit the maximum perturbation on the first linear layer, while the last constraint enforces the dual equivalence between input perturbation and parameter perturbation. The function D 2 captures the perturbation of the dual subdivision polytope upon perturbing the first linear layer by ξ A1. For a single hidden layer neural network parameterized as (A 1 + ξ A1) ∈ R p×n and B ∈ R 2×p for the 1 st and 2 nd layers respectively, D 2 can capture the perturbations in each of the two zonotopes discussed in Theorem 2. The derivation, discussion, and extension of to multi-class neural networks is left for the appendix. We solve Problem with a penalty method on the linear equality constraints, Motivational Insight to the Dual View. This intuition is presented in Figure 5. We train a single hidden layer neural network where the size of the input is 2 with 50 hidden nodes and 2 outputs on a simple dataset as shown in Figure 5. We then solve Problem 5 for a given x 0 shown in black. We show the decision boundaries for the network with and without the perturbation at the first linear layer ξ A1. Figure 5 shows that indeed perturbing an edge of the dual subdivision polytope, by perturbing the first linear layer, corresponds to perturbing the decision boundaries and in miss-classifying x 0. Interestingly and as expected, perturbing different decision boundaries corresponds to perturbing different edges of the dual subdivision. In particular, one can see from Figure 5 that altering the decision boundaries, by altering the dual subdivision polytope through perturbations in the first linear layer, can in miss-classifying a previously correctly classified input x 0. MNIST Experiment. Here, we design perturbations to misclassify MNIST images. Figure 7 shows several adversarial examples that change the network prediction for digits 8 and 9 to digits 7, 5, and 4, respectively. 
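The dual view above can be made concrete in a few lines: given a designed perturbation ξ_A1 of the first linear layer, an input perturbation η with the same effect on the logits is recovered from the linear system A_1 η = ξ_A1 x_0. The toy bias-free network and the minimum-norm least-squares solve below are our own illustrative assumptions, not Algorithm 1.

```python
import numpy as np

def forward(x, A1, B):
    return B @ np.maximum(A1 @ x, 0.0)           # bias-free (Affine, ReLU, Affine) net

def eta_from_xi(A1, xi_A1, x0):
    # Dual view: recover the input perturbation from  A1 @ eta = xi_A1 @ x0
    # (minimum-norm solution; exact here because the system is underdetermined).
    eta, *_ = np.linalg.lstsq(A1, xi_A1 @ x0, rcond=None)
    return eta

rng = np.random.default_rng(1)
A1, B = rng.standard_normal((4, 8)), rng.standard_normal((2, 4))
x0 = rng.standard_normal(8)
xi = 0.5 * rng.standard_normal((4, 8))           # stand-in for a designed ξ_A1
eta = eta_from_xi(A1, xi, x0)

# Perturbing the first layer and perturbing the input give the same logits on x0:
print(forward(x0, A1 + xi, B).round(4))
print(forward(x0 + eta, A1, B).round(4))
```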
In some cases, the perturbation η is as small as = 0.1, where x 0 ∈ n. Several other adversarial are left for the appendix. We again emphasize that our approach is not meant to be compared with (or beat) state of the art adversarial attacks, but rather to provide a novel geometrically inspired perspective that can shed new light in this field. In this paper, we leverage tropical geometry to characterize the decision boundaries of neural networks in the form (Affine, ReLU, Affine) and relate it to well-studied geometric objects such as zonotopes and polytopes. We leaverage this representation in providing a tropical perspective to support the lottery ticket hypothesis, network pruning and designing adversarial attacks. One natural extension for this work is a compact derivation for the characterization of the decision boundaries of convolutional neural networks (CNNs) and graphical convolutional networks (GCNs). Diego Ardila, Atilla P. Kiraly, Sujeeth Bharadwaj, Bokyung Choi, Joshua J. Reicher, Lily Peng, Daniel Tse, Mozziyar Etemadi, Wenxing Ye, Greg Corrado, David P. Naidich, and Shravya Shetty. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography.. A PRELIMINARIES AND DEFINITIONS. Fact 1. P+Q = {p + q, ∀p ∈ P and q ∈ Q} is the Minkowski sum between two sets P and Q. Fact 2. Let f be a tropical polynomial and let a ∈ N. Then Let both f and g be tropical polynomials, Then Note that V(P(f)) is the set of vertices of the polytope P(f). Theorem 3. For a bias-free neural network in the form of f (x): R n → R 2 where A ∈ Z p×n and B ∈ Z 2×p, and let • If the decision boundaries of f is given by the set B = {x ∈ R n : f 1 (x) = f 2 (x)}, then we have B ⊆ T (R(x)). Note that A + = max(A, 0) and A − = max(−A, 0) where the max is element-wise. The line segment (B(1, j) is one that has the end points A(j, :) + and A(j, :) − in R n and scaled by the constant B(1, j) Proof. For the first part, recall from Theorem1 that both f 1 and f 2 are tropical rationals and hence, Recall that the tropical hypersurface is defined as the set of x where the maximum is attained by two or more monomials. Therefore, the tropical hypersurface of R(x) is the set of x where the maximum is attained by two or more monomials in (H 1 (x) Q 2 (x)), or attained by two or more monomials in (H 2 (x) Q 1 (x)), or attained by monomials in both of them in the same time, which is the decision boundaries. Hence, we can rewrite that as Therefore, we have that Therefore note that, thus we have that The operator+ indicates a Minkowski sum between sets. Note that ConvexHull A ) is the convexhull between two points which is a line segment in Z n with end points that are is a Minkowski sum of line segments which is is a zonotope. Moreover, note that tropically is given as follows. Thus it is easy to see that δ(Q 2 (x)) is the Minkowski sum of the points {(B − (1, j)−B + (2, j))A − (j, :)}∀j in R n (which is a standard sum) ing in a point. Lastly, it is easy to see that δ(H 1 (x))+δ(Q 2 (x)) is a Minkowski sum between a zonotope and a single point which corresponds to a shifted zonotope. A similar symmetric argument can be applied for the second part δ(H 2 (x))+δ(Q 1 (x)). It is also worthy to mention that the extension to network with multi class output is trivial. In that case all of the analysis can be exactly applied studying the decision boundary between any two classes (i, j) where B = {x ∈ R n : f i (x) = f j (x)} and the rest of the proof will be exactly the same. 
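As a quick numerical illustration of the first claim (B ⊆ T(R(x))) before moving to the biased case, the toy example below uses one valid choice of the decomposition f_i = H_i ⊘ Q_i for a tiny hand-made bias-free net and checks that the two monomial groups of R(x) tie exactly on the decision boundary and not off it (our own sketch, not the construction used in the proof above).

```python
import numpy as np

# Tiny hand-made bias-free net f(x) = B max(Ax, 0): our own toy example.
A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[2.0, -1.0], [-1.0, 2.0]])
Bp, Bm = np.maximum(B, 0.0), np.maximum(-B, 0.0)

H = lambda i, x: Bp[i] @ np.maximum(A @ x, 0.0)   # convex-part tropical polynomial of f_i
Q = lambda i, x: Bm[i] @ np.maximum(A @ x, 0.0)   # so that f_i = H_i - Q_i
f = lambda x: np.array([H(0, x) - Q(0, x), H(1, x) - Q(1, x)])

x_on  = np.array([1.0, 1.0])    # f1 = f2 = 1  ->  on the decision boundary B
x_off = np.array([1.0, 0.2])    # f1 > f2      ->  off the boundary

for x in (x_on, x_off):
    m1, m2 = H(0, x) + Q(1, x), H(1, x) + Q(0, x)   # the two monomial groups of R(x)
    print(f(x), "tie in R(x):", bool(np.isclose(m1, m2)))
# On B the maximum of R(x) = (H1 ⊙ Q2) ⊕ (H2 ⊙ Q1) is attained by both groups,
# i.e. the boundary point lies on the tropical hypersurface T(R(x)).
```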
In this section, we derive the statement of Theorem 2 for the neural network in the form of (Affine, ReLU, Affine) with the consideration of non-zero biases. We show that the presence of biases does not affect the obtained as they only increase the dimension of the space, where the polytopes live, without affecting their shape or edge-orientation. Starting with the first linear layer for x ∈ R n, we have with coordinates, and ∆(Q 1i) is a point in (n + 1) dimensions at (A − (i, :), 0), while under π projection, δ(H 1i) is a point in n dimensions at (A + (i, :)), and δ(Q 1i) is a point in n dimensions at (A − (i, :)). It can be seen that under projection π, the geometrical representation of the output of the first linear layer does not change after adding biases. Looking to the output after adding the ReLU layer, we get, and δ(Q 1i) is the point (A − (i, :)). Again, the biases does not affect the geometry of the output after the ReLU layer, since the line segments now are connecting points in (n + 1) dimensions, but after projecting them using π, they will be identical to the line segments of the network with zero biases. Finally, looking to the output of the second linear layer, we obtain Similar arguments can be given for ∆(Q 3i) and δ(Q 3i). It can be seen that the first part in both expressions is a Minkowski sum of line segments, which will give a zonotope in (n + 1), and n dimensions in the first and second expressions respectively. While the second part in both expressions is a Minkowski sum of bunch of points which gives a single point in (n + 1) and n dimensions for the first and second expression respectively. Note that the last dimension of the aforementioned point in n + 1 dimensions is exactly the i th coordinate of the bias of the second linear layer which is dropped under the π projection. Therefore, the shape of the geometrical representation of the decision boundaries with non-zero biases will not be affected under the projection π, and hence the presence of the biases will not affect any of the of the paper. Proposition 1. Consider p line segments in R n with two arbitrary end points as follows. The zonotope formed by these line segments is equivalent to the zonotope formed be the line segments Proof. Let U j be a matrix with U j (:, i) = u i j, i = 1,..., p, w be a column-vector with w(i) = w i, i = 1,..., p and 1 p is a column-vector of ones of length p. Then, the zonotope Z formed by the Minkowski sum of line segments with arbitrary end points can be defined as Note that the Minkowski sum of any polytope with a point is a translation; thus, the follows directly from Definition 6. A ← arg miñ, where c 1 = ReLU(B(1, :)) + ReLU(−B(2, :)) and c 2 = ReLU(B(2, :)) + ReLU(−B(1, :)). Note that the problem is separable per-row ofÃ. Therefore, the problem reduces to updating rows ofà independently and the problem exhibits a closed form solution.. UpdateB + (1, :). Note that C 1 = G 1 − Diag B − (2, :) à and where Diag B − (2, :) Ã. Note the problem is separable in the coordinates ofB + (1, :) and a projected gradient descent can be used to solve the problem in such a way as A similar symmetric argument can be used to update the variablesB + (2, :),B + (1, :) andB − (2, :). Note that Theorem 2 describes a superset to the decision boundaries of a binary classifier through the dual subdivision R(x), i.e. δ(R(x)). For a neural network f with k classes, a natural extension for it is to analyze the pair-wise decision boundaries of of all k-classes. 
Thus, let T (R ij (x)) be the superset to the decision boundaries separating classes i and j. Therefore, a natural extension to the geometric loss in equation 1 is to preserve the polytopes among all pairwise follows The set S is all possible pairwise combinations of the k classes such that S = {[i, j], ∀i = j, i = 1,..., k, j = 1,..., k}. The generator Z (G (i,j) ) is the zonotope with the generator matrix G (i +,j −) = Diag ReLU(B(i, :)) + ReLU(−B(j, :)) Ã. However, such an approach is generally computationally expensive, particularly, when k is very large. To this end, we make the following observation thatG (i +, j −) can be equivalently written as a Minkowski sum between two sets zonotopes with the generators That is to say, ZG. This follows from the associative property of Minkowski sums given as follows: be the set of n line segments. Then we have that S = S 1+...+S n = P+V where the sets P =+ j∈C1 S j and V =+ j∈C2 S j where C 1 and C 2 are any complementary partitions of the set Hence,G (i +,j −) can be seen a concatenation betweenG (i +) andG (j −). Thus, the objective in 10 can be expanded as follows The approximation follows in a similar argument to the binary classifier case where approximating the generators. The last equality follows from a counting argument. We solve the objective for all multi-class networks in the experiments with alternating optimization in a similar fashion to the binary classifier case. Similarly to the binary classification approach, we introduce the 2,1 to enforce sparsity constraints for pruning purposes. Therefore the overall objective has the form For completion, we derive the updates forà andB. A = arg miñ Similar to the binary classification, the problem is seprable in the rows ofÃ. and a closed form solution in terms of the proximal operator of 2 norm follows naturally for eachÃ(i, :). UpdateB + (i, :). Note that the problem is separable per coordinates of B + (i, :) and each subproblem is updated as: A similar argument can be used to updateB − (i, :) ∀i. Finally, the parameters of the pruned network will be constructed A ←à and B ←B + −B −. Input: In this section, we are going to derive an algorithm for solving the following problem. The function D 2 (ξ A) captures the perturbdation in the dual subdivision polytope such that the dual subdivion of the network with the first linear layer A 1 is similar to the dual subdivion of the network with the first linear layer A 1 + ξ A1. This can be generally formulated as an approximation to the following distance function This can thereafter be extended to multi-class network with k classes as follows. , we take. Therefore, we can write 11 as follows To enforce the linear equality constraints A 1 η − ξ A1 x 0 = 0, we use a penalty method, where each iteration of the penalty method we solve the sub-problem with ADMM updates. That is, we solve the following optimization problem with ADMM with increasing λ such that λ → ∞. For ease of notation, lets denote where The augmented Lagrangian is thus given as follows Thereafter, ADMM updates are given as follows Updating η: Updating w: It is easy to show that the update w is separable in coordinates as follows Updating z: showed that the linearized ADMM converges for some non-convex problems. Therefore, by linearizing L and adding Bergman divergence term η, we can then update z as follows It is worthy to mention that the analysis until this step is inspired by with modifications to adapt our new formulation. 
Updating ξ A: The previous problem can be solved with proximal gradient method. In this section, we are going to describe the settings and the values of the hyper-parameters that we used in the experiments. Moreover, we will show more since we have limited space in the main paper. We begin by throwing the following question. Why investigating the tropical geometrical perspective of the decision boundaries is more important than investigating the tropical geometrical representation of the functional form of the network? In this section, we show one more experiment that differentiate between these two views. In the following, we can see that variations can happen to the tropical geometrical representation of the functional form (zonotopes in case of single hidden layer neural network), but the shape of the polytope of the decision boundaries is still unchanged and consequently, the decision boundaries. For this purpose, we trained a single hidden layer neural network on a simple dataset like the one in Figure 2, then we do several iteration of pruning, and visualise at each iteration both the polytope of the decision boundaries and the zonotopes of the functional representation of the neural network. It can be easily seen that changes in the zonotopes may not change the shape of the decision boundaries polytope and consequently the decision boundaries of the neural network. And thus it can be clearly seen that our formulation, which is looking at the decision boundaries polytope is more general, precise and indeed more meaningful. Moreover, we conducted the same experiment explained in the main paper of this section on another dataset to have further demonstration on the favour that the lottery ticket initialization has over other initialization when pruning and retraining the pruned model. It is clear that the lottery initializations is the one that preserves the shape of the decision boundaries polytope the most. In the tropical pruning, we have control on two hyper-parameters only, namely the number of iterations and the regularizer coefficient λ which controls the pruning rate. In all of the experiments, we ran the algorithm for 1 iteration only and we increase λ starting from 0.02 linearly with a factor of 0.01 to reach 100% pruning. It is also worthy to mention that the output of the algorithm will be new sparse matricesÃ,B, but the new network parameters will be the elements in the original matrices A, B that have indices correspond to the indices of non-zero elements inÃ,B. By that, the algorithm removes the non-effective line segments that do not contribute to the decision boundaries polytope, without changing the non-deleted segments. Above all, more of pruning of AlexNet and VGG16 on various datasets are shown below. For all of the experiments, {2, λ, η, ρ} had the values {1, 10 −3, 2.5, 1} respectively. the value of 1 was 0.1 when attacking the -fours-images, and 0.2 for the rest of the images. Finally, we show extra of attacking the decision boundaries of synthetic data in R 2 and MNIST images by tropical adversarial attacks. We thank R3 for the time spent reviewing the paper. It is though not clear to the authors the main reason behind the initial score of weak reject as R3 seems to have very generic questions about our work but not a particular criticism of the novelty/contribution that we can address in our rebuttal. We hope that the following response addresses and clarifies some key elements. 
Moreover, we want to bring to the attention of R3 that we have addressed the comments/typos/suggestions of all reviewers in the revised version and marked them in blue. Q1: What benefit does introducing tropical geometry bring in terms of theoretical analysis? Does using tropical geometry give us theoretical results that traditional analysis cannot give us? If so, what are they? I am trying to understand why the authors use this tool. The authors should be explicit in their motivation so that the readers are clear about the contribution of this paper. More specifically, from my perspective, the tropical semiring, tropical polynomials and tropical rational functions can all be represented with standard mathematical tools. Here they are just redefining several concepts. As discussed thoroughly in the introduction (last paragraph of page 1), tropical geometry is the younger twin of algebraic geometry on a particular semiring, defined in a way that aligns with the study of piecewise linear functions. The early definitions stated in the paper (1 to 5) are well known in the TG literature and were restated for the completeness of this paper. While it is true that the definitions can be represented with standard mathematical tools, this misses the fundamentally powerful element TG promises: TG transforms algebraic problems of a piecewise linear nature into combinatorial problems on general polytopes. To that end, Zhang et al. (2018) (to the best of our knowledge the only work at the intersection between TG and DNNs) re-derived classical results (the upper bound on the number of linear pieces of DNNs) with a much simpler analysis, by counting vertices of polytopes. In this work, instead of studying the functional representation of piecewise linear DNNs, we study their decision boundaries through the lens of TG. To wit, the geometric characterization of the decision boundaries of DNNs developed in Theorem 2 cannot be attained using standard mathematical tools. More specifically, Theorem 2 represents a superset of the decision boundaries (the tropical hypersurface T(R(x))) with a geometric structure that is the convex hull of two zonotopes. While this by itself opens doors for a family of new geometrically motivated regularizers for training DNNs that are in direct correspondence with the behaviour of the decision boundaries, we do not dwell on training beyond this point and leave that for future work. However, this new result allowed for a re-affirmation of the lottery ticket hypothesis from a fresh perspective. Moreover, we propose new geometrically motivated optimization problems (based on Theorem 2) for several classical problems, i.e. network pruning and adversarial attacks, that were not possible before and that provide several new insights and directions. That is, we show an intimate relation between network perturbations (through decision boundary polytope perturbations) and the construction of adversarial attacks (input perturbations). Q2: In "Experiments on Tropical Pruning", the authors mention "we compare our tropical pruning approach against Class Blind (CB), Class Uniform (CU), and Class Distribution (CD) methods". What are Class Blind, Class Uniform and Class Distribution? There also seems to be an error: "Figure 5 shows the pruning comparison between our tropical approach..."; I think Figure 5 should be Figure 4. We have added the definitions of the pruning methods in the revised version of the paper for completeness, and corrected the typo in the figure reference.
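For readers who want to see the tropical pruning step in code, here is a minimal, self-contained sketch. It is an illustration only, not the authors' implementation: the function names, the parameter tau and the toy sizes are made up, it covers only the row-wise proximal (block soft-thresholding) update for the ℓ2,1 regularizer and the final masking rule described in the appendix above, and it assumes the alternating solver has already produced the sparse surrogate matrix.

```python
import numpy as np

def prox_l21(X, tau):
    """Row-wise proximal operator of the l2,1 norm (block soft-thresholding).

    Each row x is mapped to max(0, 1 - tau / ||x||_2) * x, the closed-form
    minimizer of 0.5 * ||z - x||_2^2 + tau * ||z||_2 over z.
    """
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * X

def apply_pruning_mask(A, A_sparse):
    """Keep the ORIGINAL weights wherever the sparse surrogate is non-zero.

    This mirrors the rule described above: the optimization only decides which
    line segments (entries) survive; surviving weights are left unchanged.
    """
    return A * (A_sparse != 0)

# Toy usage: one proximal step on a random first-layer matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 8))        # original first-layer weights
A_sparse = prox_l21(A, tau=3.0)     # sparsified surrogate of A
A_pruned = apply_pruning_mask(A, A_sparse)
kept_rows = int((np.abs(A_pruned).sum(axis=1) > 0).sum())
print(f"rows kept after pruning: {kept_rows} / {A.shape[0]}")
```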
Q3: In the adversarial attack part, are the authors proposing a new attack method? If so, then the authors should report the test accuracy under attack. Also, the experimental results should not be restricted to the MNIST dataset. I am also not sure about the attack setting here; the authors said "Instead of designing a sample noise such that (x0 + η) belongs to a new decision region, one can instead fix x0 and perturb the network parameters to move the decision boundaries in a way that x0 appears in a new classification region." Why use this setting? Are there any intuitions? Since this is different from traditional adversarial attack terminology, the authors should stop using "adversarial attacks" as in "tropical adversarial attacks" because it is really misleading. As highlighted in the last sentence of Section 6, we are not competing against other attacks; rather, we show how this new geometric view of the decision boundaries, provided by the TG analysis in Theorem 2, can be leveraged for the construction of adversarial attacks. We want to emphasize to R3 that the polytope representing the decision boundaries (the convex hull of two zonotopes as per Theorem 2) is a function of the network parameters and not of the input space. Thus, it is not initially clear how one can frame the adversarial attack problem in this new tropical setting, since adversarial attacks perturb the input space, as opposed to the parameter space of the network, to produce a flip in the prediction. In the tropical adversarial attacks section, we show that the problem of designing an adversarial attack x_0 + η that flips the network prediction is closely related to the problem of flipping the network prediction by perturbing the network parameters of the first layer, A_1 + ζ_{A1}, where both problems are related through a linear system. That is to say, if one finds ζ_{A1} that perturbs the geometric structure (the convex hull between two zonotopes, i.e. the decision boundaries) sufficiently to flip the network prediction, one can find an equivalent pixel adversarial attack η, by solving the linear system A_1 η = ζ_{A1} x_0, that flips the prediction of the original unperturbed network (see the end of page 7). We thereafter propose Problem 5, incorporating the geometric information from Theorem 2, where the linear system is accounted for in the constraint set. We propose an algorithm to solve the problem (a mix of a penalty method and ADMM), detailed in Algorithm 1 in the appendix. Solving Problem 5 with Algorithm 1 results in the construction of adversarial attacks (η) that indeed flip the network prediction on all tested examples from the MNIST dataset. We thank R2 for the time spent reviewing the paper. We also thank R2 for acknowledging our technical and theoretical contributions. Please note that we have addressed the comments/typos/suggestions of all reviewers in the revised version and marked them in blue. Our response follows. Q1: This paper needs to be placed properly among several important missing references on the decision boundary of deep neural networks. In particular, using the introduced tropical geometry perspective, how can we obtain the complexity of the decision boundary of a deep neural network? The two works referenced by R2 are not directly related to the body of our work. Below, we summarize both works and state how our work is vastly different from both. The authors of the first work show that, under certain assumptions, the decision boundary of the last fully connected layer converges to an SVM classifier.
That is to say, the features learnt in deep neural networks are linearly separable with a max-margin type linear classifier. On the other hand, the authors of the second work showed that the decision regions of neural networks with width smaller than the input dimension are unbounded. In our work, we use a new type of analysis (tropical geometry) to represent the set of decision boundaries B through its superset T(R(x)), which is the solution set of the tropical polynomial R(x). We then show that this solution set is related to a geometric structure referred to as the decision boundaries polytope (the convex hull between two zonotopes); this is analogous to constructing Newton polytopes for the solution sets of classical polynomials in algebraic geometry. The normals to the edges of this polytope are parallel to the superset of the decision boundaries T(R(x)). That is to say, if one processes the polytope in a way that preserves the directions of the normals, the decision boundaries of the network are preserved. This is the basic idea behind all later experiments. In general, this new representation presents a fresh revisit of the lottery ticket hypothesis and an entirely new view of network pruning and adversarial attacks. We do believe this new representation can be of benefit to other applications and can open doors for a family of new geometrically inspired network regularizers as well. Q3: The second part of Theorem 2 should be explained straightforwardly and clearly, as it plays an important role in the subsequent results and applications. We have added a "Digesting Theorem 2" paragraph in the revised version and rearranged the structure a bit around Theorem 2. Most of the parameters (memory complexity) are in the fully connected layers. For example, the convolutional part of VGG16 has 14,714,688 parameters, whereas the fully connected layers have 262,000,400 parameters in total, which is 17 times larger. Similarly, the convolutional part of AlexNet has 3,747,200 parameters, while only the first fully connected layer has 37,752,832 parameters. However, efficiently extending the tropical pruning to convolutional layers is a nontrivial and interesting direction. Generally speaking, convolutional layers fit our framework naturally, since a convolutional kernel can be represented with a structured Toeplitz/circulant matrix. However, a question of efficiency still remains, as one still needs to construct the underlying structured matrix representing the convolutional kernel. Thereafter, a direction of interest is the tropical formulation of the network pruning problem as a function of the convolutional kernels, bypassing the need to construct the dense representation of the kernel. We keep this for future work. Q5: On the similarity measure. Comparing the exact decision boundaries of two different architectures can be very difficult, in the sense that the decision boundaries of a two-class output network f are defined as {x ∈ R^n : f_1(x) = f_2(x)}. Another approach to comparing decision boundaries, which is proposed by our work, is to compute the distance between the dual subdivision polytopes δ(R(x)) of the tropical polynomials R(x) representing the two architectures. This is because the normals to the edges of the polytope δ(R(x)) are parallel to a superset of the decision boundaries (see Figure 1). This is exactly the proposed objective, where d is a distance function comparing the orientations of two general polytopes.
Since finding a good choice for d is generally difficult, we instead approximate it, for ease, by comparing the generators constructing δ(R(x)) in Euclidean distance (the stated objective). Experimentally, and following prior art in the pruning literature (Han et al., 2015), to compare the effectiveness of the pruning scheme we compare the test accuracies across architectures as a function of the pruning ratio. Regardless, the reviewer is right that similar test accuracies do not imply similar decision boundaries; they are only an indication. In adversarial example generation, typically, for a pre-trained deep neural network one is interested in generating examples that are misclassified by the model while resembling real instances. In this setting, we keep the model, and thus its decision boundary, intact. In this paper, nevertheless, aiming at generating adversarial examples, the decision boundary and thus the (pre-trained) model is altered. By changing the decision boundary, however, the model's decisions for original real samples might change as well. Therefore, it is not clear to the reviewer how the introduced method is comparable to the well-established adversarial example generation setting. The new approach is definitely comparable to the well-established adversarial example generation setting; let us explain. The new analysis provided by Theorem 2 allows us to present the decision boundaries geometrically as a convex hull of two zonotopes, which is a function of only the network parameters and not of the input space. Thus, it is not initially clear how one can frame the adversarial attack problem in this new tropical setting, since adversarial attacks perturb the input space, as opposed to the parameter space of the network, to flip the prediction. In the tropical adversarial attacks section, we show that the problem of designing an adversarial attack x_0 + η that flips the network prediction is closely related to the problem of flipping the network prediction by perturbing the network parameters of the first layer, A_1 + ζ_{A1}, where both problems are related through a linear system. That is to say, if one finds ζ_{A1} (a perturbation of the first linear layer) that perturbs the geometric structure (the convex hull between two zonotopes, i.e. the decision boundaries) sufficiently to flip the network prediction, one can find an equivalent pixel adversarial attack η, by solving the linear system A_1 η = ζ_{A1} x_0, that flips the prediction of the original unperturbed network (see the end of page 7). We incorporate this into an overall objective in which the linear system appears as a constraint. Upon solving it with the proposed Algorithm 1, we attack the original (unperturbed) network with the adversarial perturbation η. Therefore, this is comparable to the classical adversarial attack framework. This approach indeed resulted in flipping the network prediction on all tested examples from the MNIST dataset. As highlighted at the end of Section 6, we do not aim with our approach to outperform state-of-the-art adversarial attacks, but rather to provide a novel geometrically inspired perspective that can shed new light on this field. Q7: Two previous papers investigated the decision boundary of deep neural networks in the presence of adversarial examples. Please discuss how the introduced method in this paper is placed among these methods.
The first work analyzed the geometry of adversarial examples by means of manifold reconstruction to study the trade-off between robustness under different norms. The second work, on the other hand, crafted adversarial attacks by estimating the distance to the decision boundaries using random search directions. Both papers made a local estimation of the decision boundary around the attacked point in order to construct the adversarial attack on the input image. In our work, we geometrically characterized the decision boundaries in Theorem 2, where the polytope (the convex hull of two zonotopes) is only a function of the network parameters and NOT the input space. We presented a dual view of adversarial attacks in which one can construct adversarial examples by investigating network parameter perturbations that result in the largest perturbation of this polytope representing the decision boundaries. The scope of our work, unlike prior art, is focused on a new geometric polytope representation of the decision boundary in the network parameter space (not the input space) through a novel analysis. We have added a discussion of both papers in the adversarial attacks section (Section 6). We have addressed the concerns of R2 and left the changes in blue in the revised version. Li, Yu; Richtarik, Peter; Ding, Lizhong; Gao, Xin. "On the decision boundary of deep neural networks." We thank R1 for the constructive, detailed and thorough review of the paper and for acknowledging our contributions and the new insights. Our response to R1's concerns follows. Regarding clarity, exposition and focus: to improve the clarity and exposition, we have added several paragraphs in the revised version of the paper; the revised edits are marked in blue. As regards the focus, we found it challenging to carry out this major change in the paper within a reasonable time. To that end, we have done our best to further elaborate on several key points in the paper. For instance, we have merged the paragraph above the contributions with the contributions paragraph, we have added another paragraph dissecting Theorem 2, and we have added a few relevant references that are essential for the context of motivating tropical adversarial attacks. Regarding the suggestions: • Adding the information that the semiring lacks an additive inverse: this has been addressed in the revised version. • Adding the tropical quotient to Definition 1: this has been addressed in the revised version. • Definition of π and the upper faces: indeed, π is a projection operator that drops the last coordinate. As for upper faces, the formal definition is as follows: for a polytope P, F is an upper face of P if x + te ∉ P for any x ∈ F and t > 0, where e is a canonical vector. That is, the faces that can be seen from "above". A good graphical example can be found in Figure 2 of Zhang et al. (2018). We dropped this definition from the paper as it may add some confusion, while it does not play an important role in the later analysis. • Theorem 2 lacks an intuitive formulation: this has been addressed in the revised version, and we have rearranged the structure of the text below Theorem 2. • Regarding the issue of the tropical hypersurface: indeed, the superset is in terms of set theory. That is to say, the set of decision boundaries B is a subset of the tropical hypersurface set T(R(x)). • Regarding Figure 2: the color map represents the compression percentage. We have added a legend to Figure 2.
Note that the second figure in Figure 2, titled "original polytope", represents the polytope of the dual subdivision (the convex hull between two zonotopes). While the polytope seems to have only 4 vertices, there are in fact many other overlapping vertices and therefore many small edges between the seemingly overlapping vertices. Observe that the normals to the 4 major edges of the "original polytope" are indeed parallel to the decision boundaries, plotted by performing forward passes through the network, in the first figure titled "Input Space". Note that, even as more compression is performed, the orientation of the overall polytope is preserved for the lottery ticket initialization, thereby preserving the main orientation of the decision boundaries. This is unlike the other types of initialization, where the orientation of the polytope is vastly different across compression ratios, resulting in a larger change in the orientation of the decision boundaries. • I would suggest placing Figure 1 after stating Theorem 2, since it is only referenced later on. Furthermore, the red structures are somewhat confusing. According to Theorem 2, the decision boundary is a subset of the hypersurface, right? What is the relation of the red structures in the convex hull visualisation? The caption states that they are normals, but as far as I can tell, this has not been formalised anywhere in the paper (it is used later on, though). Correct: the decision boundaries are subsets of the tropical hypersurfaces. However, it has been shown (Proposition 3.1.6) that the tropical hypersurface T of any d-variate tropical polynomial is the (d−1)-skeleton of the polyhedral complex dual to the dual subdivision δ. This implies that the normals to the edges (faces in higher dimensions) are parallel to the tropical hypersurface. If R1 is interested in learning more about this, an excellent starting point to build up this intuition is the work of Erwan Brugallé and Kristin Shaw, "A bit of tropical geometry"; please refer to Section 2.2, "Dual subdivisions", pages 6 and 7. • The discussion about the functional form: we elaborate on this in Section F in the appendix. Instead of investigating the decision boundaries polytope of the tropical polynomial R(x) representing the decision boundaries, we analyze the 4 different tropical polynomials representing the 2 different classes. Recall that each output function of the network is a tropical rational function of two tropical polynomials, giving rise to 4 different polytopes. Figure 7 shows that the decision boundary polytope δ(R(x)) with the lottery ticket is hardly changed while pruning the network (first column in Figure 7). On the contrary, the zonotopes (dual subdivisions of the 4 different tropical polynomials) vary much more significantly (columns 2, 3, 4, 5). This demonstrates that there can exist different tropical polynomials representing the functional form of the network (i.e. H_{1,2} and Q_{1,2}) while having the same structure for the decision boundary polytope, shown in the corresponding first figure of the same row of Figure 7. This is a mere observation and worth investigating in future work. • In Section 4, how many experiments of the sort were performed? I find this a highly instructive view, so I would love to see more experiments of this sort. Do these claims hold over multiple repetitions and for (slightly) larger architectures as well? We already had one extra experiment (Figure 8) in the appendix for another dataset.
Based on the suggestion of R1, we have also added two more experiments (Figure 9) on two other datasets. Note that extracting the decision boundaries polytope for larger architectures is much more difficult, for two different reasons. First, for deeper networks beyond the structure Affine-ReLU-Affine, the decision boundaries polytope is generic and no longer enjoys the nice properties zonotopes exhibit; enumerating their vertices rapidly turns into a computationally intractable problem. Second, one can perhaps visualize the polytope for networks with 3-dimensional input, but it gets trickier beyond that. • Regarding the comment that the claim that orientations are preserved should be formalised: that is an excellent question. We have investigated several metrics that try to capture information about the orientation of a given polytope. For instance, we have investigated a feature given by the histogram of oriented normals, that is, a histogram of the angles of all edge normals of a given polytope. We have also investigated the Hausdorff distance as a metric between polytopes. However, we have decided to keep this for a future direction, as this is by itself an entirely new line of work: designing distance functions d between polytopes that capture orientation information, which can be used for tropical pruning (Objective 8) or perhaps other applications. Another interesting direction is whether such a distance function can be learnt in a meta-learning fashion. For now, we restrict the experiments to the approximation used in the objective, which only captures the distance between the sets of generators of the zonotopes in the Euclidean sense. • Adding a link to the definition of the Minkowski sum in the appendix: this has been addressed in the revised version. • Description of the other pruning methods: this has been addressed in the revised version, where we have added definitions for all pruning competitors. • About the plots in Figure 4: in the experiments of Figure 4, there is no stochasticity. All base networks (AlexNet and VGG16) are trained beforehand to achieve the state-of-the-art baseline on the respective datasets (SVHN, CIFAR10 and CIFAR100). The networks are then fixed, and several pruning schemes, including the tropical approach, are applied with varying pruning ratios. We do not re-train the networks after each pruning step; thus there is no source of randomness. However, this is still an excellent observation, because we conduct experiments in the appendix where, after each pruning ratio, we fine-tune only the biases of the classifier (Figure 10) or the biases of the complete network (Figure 11). In such experiments, due to the fine-tuning step, we will consider reporting results averaged over multiple runs in the final version. • In Section 6, I find the comment on normals generating a superset to the decision boundaries hard to understand. Indeed, the wording of the sentence was suboptimal. The statement was to re-affirm what had been established in earlier sections: the normals to the decision boundary polytope δ(R(x)) represent the tropical hypersurface set T(R(x)), which is a superset of the decision boundary set B as per Theorem 2. We have rephrased the sentence. • Take-away message regarding perturbing the decision boundaries: the proposed approach of perturbing the decision boundaries is a dual view of adversarial attacks.
That is to say, to flip a network prediction for a sample x_0, one can either adversarially perturb the sample (add noise) so that it crosses the decision boundary into a new classification region, or one can perturb the decision boundaries to move closer to the sample x_0 so that it appears to lie in a new classification region. Figure 5 demonstrates the latter visually by perturbing the decision boundary polytope. One can observe that perturbing the correct edge of the polytope in Figure 5 corresponds to altering a specific decision boundary. Figure 5, as correctly pointed out by R1, is indeed a feasibility study showing that one can perturb a decision boundary by perturbing the dual subdivision polytope. However, the real take-away message is that these two views (perturbing the input or the parameter space) are intimately related through a linear system, as discussed in the subsection titled "Dual View to Adversarial Attacks". We propose an objective function (Problem 5) and an algorithm (Algorithm 1) to tackle this problem. The solution to Problem 5 provides an INPUT perturbation that results in altering the network prediction. That is to say, the new framework allows for incorporating geometrically motivated objective functions in the construction of classical adversarial attacks. • Regarding the future extension to CNNs and GCNs: we are currently investigating efficient extensions of the current framework to convolutional layers. Note that while convolutional layers can be represented with a large structured Toeplitz/circulant matrix, we are interested in extensions that allow for a similar analysis as a function of the convolutional kernel, bypassing the need to construct the convolutional matrix. GCNs are definitely an exciting future direction that we have not yet entertained. • Minor style issues: we have addressed all the style issues in the revised version. | Tropical geometry can be leveraged to represent the decision boundaries of neural networks and bring to light interesting insights. | 652 | scitldr
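As a compact illustration of the dual view discussed in the responses above, the following hedged sketch recovers an input perturbation from a hypothetical first-layer perturbation by solving the linear system A_1 η = ζ x_0 in the least-squares sense. It is a simplification under made-up names and toy sizes, not the paper's Algorithm 1 (which uses a penalty method with ADMM and additional constraints).

```python
import numpy as np

def eta_from_layer_perturbation(A1, zeta, x0):
    """Recover an input perturbation eta from a first-layer perturbation zeta.

    We solve A1 @ eta = zeta @ x0 in the least-squares sense, so that
    A1 @ (x0 + eta) ≈ (A1 + zeta) @ x0: perturbing the input of the original
    network mimics perturbing its first linear layer.
    """
    rhs = zeta @ x0
    eta, *_ = np.linalg.lstsq(A1, rhs, rcond=None)
    return eta

# Toy check of the equivalence. With fewer rows than input dimensions the
# system is underdetermined and generically solved exactly; otherwise the
# identity below holds only approximately.
rng = np.random.default_rng(0)
M, d = 16, 64                       # hidden width and input dimension (toy sizes)
A1 = rng.normal(size=(M, d))        # first linear layer
zeta = 0.01 * rng.normal(size=(M, d))
x0 = rng.normal(size=d)
eta = eta_from_layer_perturbation(A1, zeta, x0)
print(np.allclose((A1 + zeta) @ x0, A1 @ (x0 + eta), atol=1e-8))
```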
First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks. Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost of calculating the second-order information. In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss. Our method draws inspiration from the connection between neural network optimization and kernel regression with the neural tangent kernel (NTK). Different from typical second-order methods that have heavy computational cost in each iteration, GGN has only minor overhead compared to first-order methods such as SGD. We also give theoretical results showing that for sufficiently wide neural networks, the convergence rate of GGN is quadratic. Furthermore, we provide a convergence guarantee for the mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks. Preliminary experiments on regression tasks demonstrate that, for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD. First-order methods such as Stochastic Gradient Descent (SGD) are currently the standard choice for training deep neural networks. The merit of first-order methods is obvious: they only calculate the gradient and therefore are computationally efficient. In addition to better computational efficiency, SGD has even more advantages among the first-order methods. At each iteration, SGD computes the gradient only on a mini-batch instead of all training data. Such randomness introduced by sampling the mini-batch can lead to better generalization and better convergence, which is crucial when the function class is that of highly overparameterized deep neural networks. Recently there has been a huge body of work trying to develop more efficient first-order methods beyond SGD. Second-order methods, despite their better convergence rate, are rarely used to train deep neural networks. At each iteration, the algorithm has to compute second-order information, for example the Hessian or its approximation, which is typically an m by m matrix where m is the number of parameters of the neural network. Moreover, the algorithm needs to compute the inverse of this matrix. The computational cost is prohibitive, and usually it is not even possible to store such a matrix. The formulas involved also require subtle implementation tricks to be used with backpropagation. In contrast, GGN has a simpler update rule and better guarantees for neural networks. In a concurrent and independent work, Zhang et al. (2019a) showed that the natural gradient method and K-FAC have a linear convergence rate for sufficiently wide networks in the full-batch setting. In contrast, our method enjoys a higher-order (quadratic) convergence rate guarantee for overparameterized networks, and we focus on developing a practical and theoretically sound optimization method. We also reveal the relation between our method and NTK kernel regression, so, using results based on the NTK, one can easily give generalization guarantees for our method. Another independent work proposed a preconditioned Q-learning algorithm which has a form similar to our update rule. Unlike the methods considered in Zhang et al.
(2019a), which contain a learning rate that needs to be tuned, our derivation of GGN does not introduce a learning-rate term (or, put differently, it suggests that the learning rate can be fixed to 1 and still obtain good performance, which is verified in Figure 2(c)). Nonlinear least squares regression is a general machine learning problem. Given data pairs {x_i, y_i}_{i=1}^n and a class of nonlinear functions f, e.g. neural networks, parameterized by w, nonlinear least squares regression aims to solve the optimization problem min_{w∈R^m} L(w) = (1/2) Σ_{i=1}^n (f(w, x_i) − y_i)^2. In the seminal work of Jacot et al. (2018), the authors consider the case when f is a neural network with infinite width. They showed that optimization of this problem using gradient flow involves a special kernel, which is called the neural tangent kernel (NTK). Follow-up works further extended the relation between optimization and the NTK, which can be summarized in the following lemma. Lemma 1 (Lemma 3.1 in Arora et al. (2019a); see also related work). Consider optimizing the above problem by gradient descent with an infinitesimally small learning rate, i.e. gradient flow dw_t/dt = −∇L(w_t), where w_t denotes the parameters at time t. Let f_t = (f(w_t, x_i))_{i=1}^n ∈ R^n be the network outputs on all x_i's at time t, and y = (y_i)_{i=1}^n be the desired outputs. Then f_t follows the evolution df_t/dt = −G_t (f_t − y), where G_t is an n × n positive semidefinite matrix, i.e. the Gram matrix w.r.t. the NTK at time t, whose (i, j)-th entry is ⟨∇_w f(w_t, x_i), ∇_w f(w_t, x_j)⟩. The key idea of this line of work and its extensions is that when the network is sufficiently wide, the Gram matrix at initialization G_0 is close to a fixed positive definite matrix defined by the infinite-width kernel, and G_t is close to G_0 during training for all t. Under this situation, G_t remains invertible, and the above dynamics is then identical to the dynamics of solving kernel regression with gradient flow w.r.t. the current kernel at time t. In fact, Arora et al. (2019a) rigorously prove that a fully-trained, sufficiently wide ReLU neural network is equivalent to the kernel regression predictor. As pointed out in prior work, the idea of NTK can be summarized as a linear approximation using a first-order Taylor expansion. We give an example of this idea for the NTK at initialization: f(w, x) ≈ f(w_0, x) + ⟨∇_w f(w_0, x), w − w_0⟩, where ∇_w f(w_0, x) can then be viewed as an explicit expression of the feature map at x, w − w_0 is the parameter in the reproducing kernel Hilbert space (RKHS) induced by the NTK, and f(w, x) − f(w_0, x) is the corresponding (approximately) linear function of that parameter. The idea of linear approximation is also used in the classic Gauss-Newton method to obtain an acceleration algorithm for solving nonlinear least squares problems. Concretely, at iteration t, the Gauss-Newton method takes the first-order approximation f(w, x_i) ≈ f(w_t, x_i) + ⟨∇_w f(w_t, x_i), w − w_t⟩, where w_t stands for the parameters at iteration t. We note that this is also the linear expansion used for deriving the NTK at time t. Using this approximation, to update the parameters one can instead solve the problem min_w (1/2) ‖f_t + J_t (w − w_t) − y‖_2^2, where f_t, y have the same meaning as in Lemma 1, and J_t = (∇_w f(w_t, x_1), ..., ∇_w f(w_t, x_n))^⊤ ∈ R^{n×m} is the Jacobian matrix. A necessary and sufficient condition for w to be the solution of this problem is J_t^⊤ J_t (w − w_t) = −J_t^⊤ (f_t − y). Below we will denote H_t := J_t^⊤ J_t ∈ R^{m×m}. For an under-parameterized model (i.e., the number of parameters m is less than the number of data n), H_t is invertible, and the update rule is w_{t+1} = w_t − H_t^{−1} J_t^⊤ (f_t − y). This can also be viewed as an approximate Newton's method using H_t = J_t^⊤ J_t to approximate the Hessian matrix. In fact, the exact Hessian matrix is ∇^2 L(w_t) = J_t^⊤ J_t + Σ_{i=1}^n (f(w_t, x_i) − y_i) ∇^2_w f(w_t, x_i). In the case when f is only mildly nonlinear w.r.t.
w at the data points x_i, we have ∇^2_w f(w_t, x_i) ≈ 0, and H_t is close to the real Hessian. In this situation, the behavior of the Gauss-Newton method is similar to that of Newton's method, and it can thus achieve a superlinear convergence rate. Classic second-order methods using an approximate Hessian, such as the Gauss-Newton method described in the previous section, face obvious difficulties dealing with the intractable approximate Hessian matrix when the regression model is an overparameterized neural network. In Section 3.1, we develop a Gram-Gauss-Newton (GGN) method which is inspired by NTK kernel regression and does not require the computation of the approximate Hessian. In Section 3.2, we show that for sufficiently wide neural networks, GGN has a quadratic convergence rate. In Section 3.3, we show that the additional computational cost (per iteration) of GGN compared to SGD is small. We now describe our GGN method to learn overparameterized neural networks for regression problems. As mentioned in the previous sections, for sufficiently wide networks, using gradient descent for solving the regression problem has dynamics similar to using gradient descent for solving NTK kernel regression (Lemma 1) w.r.t. the NTK at each step. However, one can also solve the kernel regression problem w.r.t. the NTK at each step immediately, using the explicit formula of kernel regression. By solving explicitly instead of using gradient descent to solve NTK kernel regression, one can expect the optimization to be accelerated. We propose our Gram-Gauss-Newton (GGN) method to directly solve the NTK kernel regression with Gram matrix G_t at each time step t. Note that, based on the approximation above, the feature map of the NTK at time t can be expressed as x ↦ ∇_w f(w_t, x), the linear parameters in the RKHS are w − w_t, and the regression target corresponds to f(w, x_i) − f(w_t, x_i). Therefore, the kernel (ridgeless) regression solution yields the update w_{t+1} = w_t − J_{t,S}^⊤ G_{t,S}^{−1} (f_{t,S} − y_S), where J_{t,S} is the matrix of features at iteration t computed on the training data set S (which is equal to the Jacobian), f_{t,S} and y_S are the vectorized outputs of the neural network and the corresponding targets on S, respectively, and G_{t,S} = J_{t,S} J_{t,S}^⊤ is the Gram matrix of the NTK on S. One may wonder what the relation is between our derivation from NTK kernel regression and the Gauss-Newton method. We point out that for overparameterized models there are infinitely many solutions of the linearized problem, but our update rule essentially uses the minimum-norm solution. In other words, the GGN update rule re-derives the Gauss-Newton method with the minimum-norm solution. This somewhat surprising connection is due to the fact that in kernel learning, people usually choose a kernel with powerful expressivity, i.e. the dimension of the feature space is large or even infinite. However, by the representer theorem, the solution of kernel (ridgeless) regression lies in an n-dimensional subspace of the RKHS and minimizes the RKHS norm. We refer the reader to Chapter 11 of the referenced textbook for details. As mentioned in Section 1, the design of learning algorithms should consider not only optimization but also generalization. It has been shown that using mini-batches instead of the full batch to compute derivatives is crucial for the learned model to have good generalization ability. Therefore, we propose a mini-batch version of GGN.
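Before turning to the mini-batch variant, here is a minimal sketch of the full-batch update just described. It is an illustration only: the function and variable names are made up, the Jacobian is assumed to be available, and the small eps added to the Gram matrix is purely for numerical stability (it is not part of the update rule as stated above).

```python
import numpy as np

def full_batch_ggn_update(w, J, f, y, eps=1e-8):
    """One full-batch GGN step: w <- w - J^T (J J^T)^{-1} (f - y).

    J is the n x m Jacobian of the network outputs w.r.t. the parameters,
    f the current outputs on all n training points, and y the targets.
    For an overparameterized model (m > n) this is the minimum-norm solution
    of the linearized least-squares problem, i.e. one step of NTK kernel
    regression with Gram matrix G = J J^T.
    """
    G = J @ J.T
    residual = f - y
    return w - J.T @ np.linalg.solve(G + eps * np.eye(G.shape[0]), residual)

# Toy usage with a model that is exactly linear in its parameters, so a
# single step drives the training residual to (numerically) zero.
rng = np.random.default_rng(0)
n, m = 20, 200
J = rng.normal(size=(n, m))   # Jacobian of a linear-in-parameters model
w = rng.normal(size=m)
y = rng.normal(size=n)
f = J @ w                     # current outputs
w_new = full_batch_ggn_update(w, J, f, y)
print("residual norm after one step:", np.linalg.norm(J @ w_new - y))
```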
The update rule is the following: w_{t+1} = w_t − J_{t,B_t}^⊤ G_{t,B_t}^{−1} (f_{t,B_t} − y_{B_t}), where B_t is the mini-batch used at iteration t, J_{t,B_t} and G_{t,B_t} are the Jacobian and the Gram matrix computed using the data of B_t, respectively, and f_{t,B_t}, y_{B_t} are the vectorized outputs and the corresponding targets on B_t, respectively. G_{t,B_t} = J_{t,B_t} J_{t,B_t}^⊤ is a very small matrix when using a typical batch size. One difference between this update rule and the classic Gauss-Newton update is that our update rule only requires computing the Gram matrix G_{t,B_t} and its inverse. Note that the size of G_{t,B_t} is equal to the size of the mini-batch and is typically very small, so this also greatly reduces the computational cost. Using the idea of kernel ridge regression (which can also be viewed as a Levenberg-Marquardt extension of the Gauss-Newton method), we introduce the following variant of GGN: w_{t+1} = w_t − J_{t,B_t}^⊤ (λ G_{t,B_t} + αI)^{−1} (f_{t,B_t} − y_{B_t}), where λ > 0 is another hyper-parameter controlling the learning process. Our algorithm is formally described in Algorithm 1: at iteration t, (1) fetch a mini-batch B_t from the dataset; (2) calculate the Jacobian matrix J_{t,B_t}; (3) calculate the Gram matrix G_{t,B_t} = J_{t,B_t} J_{t,B_t}^⊤; (4) update the parameters by w_{t+1} = w_t − J_{t,B_t}^⊤ (λ G_{t,B_t} + αI)^{−1} (f_{t,B_t} − y_{B_t}). In this subsection, we show that for two-layer neural networks, if the width is sufficiently large, then: (i) full-batch GGN converges with a quadratic convergence rate; (ii) mini-batch GGN converges linearly. (For clarity, here we only present a proof for two-layer neural networks, but we believe that it is not hard for the results to be extended to deep neural networks using the techniques developed in Du et al. (2018a).) As we explained through the lens of NTK, the result is a consequence of the fact that, for wide enough neural networks, if the weights are initialized according to a suitable probability distribution, then with high probability the output of the network is close to a linear function w.r.t. the parameters (but nonlinear w.r.t. the input of the network) in a neighborhood containing the initialization point and a global optimum. Although the neural networks used in practice are far from that wide, this still motivates us to design the GGN algorithm. Neural network structure. We use the following two-layer network: f(W, a, x) = (1/√M) Σ_{r=1}^M a_r σ(w_r^⊤ x), where x ∈ R^d is the input, M is the network width, W = (w_1, ..., w_M) and σ(·) is the activation function. Each entry of W is i.i.d. initialized with the standard Gaussian distribution, w_r ∼ N(0, I_d), and each entry of a is initialized from the uniform distribution on {±1}. Similar to Du et al. (2018b), we only train the network on the parameter W, just for the clarity of the proof. We also assume the activation function σ(·) is l-Lipschitz and β-smooth, and l and β are regarded as O(1) absolute constants. The key finding, as pointed out in Du et al. (2018b; a) and related work, is that under such initialization, the Gram matrix G has an asymptotic limit K, which is, under mild conditions (e.g. the input data not being degenerate, see Lemma F.2 of Du et al. (2018a)), a positive definite matrix. Assumption 1 (Least Eigenvalue of the Limit of the Gram Matrix). We assume the matrix K defined above is positive definite, and denote its least eigenvalue by λ_0 > 0. Now we are ready to state our theorem for full-batch GGN. Theorem 1 (Quadratic Convergence of Full-batch GGN on Overparameterized Neural Networks). Assume Assumption 1 holds and that the scale of the data is O(1). Then, for a sufficiently large network width M, with probability 1 − δ over the random initialization, the full-batch version of GGN, whose update rule is given above,
satisfies the following: 1) the Gram matrix G_{t,S} at each iteration is invertible; 2) the loss converges to zero at a second-order rate, with the residual at each step bounded by the squared residual of the previous step up to a factor C that is independent of M. For the mini-batch version of GGN, by the analysis of its NTK limit, the algorithm is essentially performing serial subspace correction on the subspaces induced by the mini-batches. Hence mini-batch GGN is similar to the Gauss-Seidel method applied to solving systems of linear equations, as shown in the proof of the following theorem. Similar to the full-batch situation, GGN takes the exact solution of the "kernel regression problem on the subspace", which is faster than just taking a gradient step on the subspace. Moreover, we note that existing results on the convergence of SGD on overparameterized networks usually use the idea that when the step size is bounded by a quantity related to smoothness, SGD can be reduced to GD. However, our analysis proceeds differently from the analysis of GD and thus does not rely on a small step size. In the following, we denote by G_0 ∈ R^{n×n} the initial Gram matrix. Let n = bk, where b is the batch size and k is the number of batches, and let A denote the iteration matrix built from the block-diagonal and block-lower-triangular parts of G_0, in analogy with the Gauss-Seidel iteration. We will show that the convergence of mini-batch GGN is highly related to the spectral radius of A. To simplify the proof, we make the following mild assumption on A. Assumption 2 (Assumption on the Iteration Matrix). Assume the matrix A defined above is diagonalizable. We then choose an arbitrary diagonalization of A as A = P^{−1} Q P and denote μ = ‖P‖_2 ‖P^{−1}‖_2. We note that Assumption 2 is only for the sake of simplicity; even if it does not hold, an infinitesimally small perturbation can make any matrix diagonalizable, and it will not affect the proof. Now we are ready to state the theorem for mini-batch GGN. Theorem 2 (Convergence of Mini-batch GGN on Overparameterized Neural Networks). Assume Assumptions 1 and 2 hold, and assume the scale of the data is O(1). We use the mini-batch version of GGN whose update rule is given above, and the batch B_t is chosen sequentially and cyclically with a fixed batch size b and k = n/b updates per epoch. If the network width M is sufficiently large, then with probability 1 − δ over the random initialization, we have the following: 1) the Gram matrix G_{t,B_t} at each iteration is invertible; 2) the loss converges to zero linearly, so that after T epochs the residual is bounded by a geometric factor, governed by the spectral radius of A, times the initial residual. Proof sketch for Theorems 1 and 2. Denote J_t = J(W_t). For the full-batch version, we expand the residual f_{t+1} − y using the update rule, and then control the first term of the expansion by bounding how much the Jacobian changes along the update; once this term is upper bounded, we obtain the claimed recursion. For the mini-batch version, we similarly expand the residual, where the subscript B_t denotes the sub-matrix/vector corresponding to the batch, and G̃ is a zero-padded version of G_{t,B_t}^{−1} introduced so that the expansion holds. Therefore, after one epoch (from the (tk+1)-th to the ((t+1)k)-th update), the residual is multiplied by an iteration matrix A_t. We will see that the matrix A_t is close to the matrix A defined above, so the analysis boils down to the spectral properties of A. For both theorems, we can compute that, as M increases, the norm of the update ‖W_t − W_{t+1}‖_F does not increase with M, so the update is small compared to the Gaussian initialization. From this we can derive that the matrices J, G, etc. remain close to their initialization, which makes bounding their norms possible. The full proof is in the appendix. In short, the accelerated convergence is related to the local linearity and the stability of the Jacobian and the Gram matrix.
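To make the mini-batch update of Algorithm 1 concrete, here is a short, hedged sketch in PyTorch. It is an illustration rather than the authors' implementation: the per-sample Jacobian is built naively with one backward pass per sample, the model and all names (ggn_step, lam, alpha) are made up for the example, lam and alpha simply mirror λ and α, and a scalar-output regression network is assumed.

```python
import torch

def ggn_step(model, x_batch, y_batch, lam=1.0, alpha=0.1):
    """One mini-batch GGN iteration: w <- w - J^T (lam*G + alpha*I)^{-1} (f - y),
    with J the b x m per-sample Jacobian and G = J J^T the b x b Gram matrix."""
    params = [p for p in model.parameters() if p.requires_grad]
    f = model(x_batch).squeeze(-1)                       # (b,) scalar outputs
    # Per-sample Jacobian, one backward pass per sample (clear but not fast).
    rows = []
    for i in range(f.shape[0]):
        grads = torch.autograd.grad(f[i], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    J = torch.stack(rows)                                # (b, m)
    G = J @ J.T                                          # (b, b) Gram matrix
    resid = (f - y_batch).detach()
    step = J.T @ torch.linalg.solve(lam * G + alpha * torch.eye(G.shape[0]), resid)
    with torch.no_grad():                                # apply the update in place
        offset = 0
        for p in params:
            n = p.numel()
            p -= step[offset:offset + n].view_as(p)
            offset += n
    return resid.pow(2).mean().item()                    # pre-update batch loss

# Toy usage on a small regression network.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))
x, y = torch.randn(16, 8), torch.randn(16)
for _ in range(5):
    loss = ggn_step(net, x, y, lam=1.0, alpha=0.1)
print("pre-update mean squared residual at the last step:", loss)
```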
We emphasize that our theorems serve more as a motivation than a justification of our GGN algorithm, because we expect that GGN works in practice even in milder situations, when M is not as large as the theorem demands or for deep networks with different architectures, and that GGN would still perform much better than first-order methods. We have proved that for sufficiently overparameterized deep neural networks, full-batch GGN has a quadratic convergence rate. In this subsection, we analyze the per-iteration computational cost of GGN and compare it to that of SGD. For every mini-batch (i.e., iteration), there are two major steps of computation in GGN: • (A) Forward, and then backpropagate to compute the Jacobian matrix J. • (B) Use J to compute the update J^⊤ (λG + αI)^{−1} (f − y). We show that the computational complexity of (A) is the same as that of SGD with the same batch size, and the computational complexity of (B) is small compared to (A) for typical networks and batch sizes. Thus, the per-iteration computation overhead of GGN is very small compared to SGD. Overall, in terms of training time, GGN can be much faster than SGD. For the computation in step (A), the forward part is just the same as that of SGD. For the backward part, for every input datum, GGN keeps track of the output's derivative with respect to the nodes in the middle of the computational graph. This part is just the same as backpropagation in SGD. What is different is that GGN also, for every input datum, keeps track of the output's derivative with respect to the parameters, while in SGD the derivatives with respect to the parameters are averaged over a batch of data. However, it is not difficult to see that the computational costs of GGN and SGD are the same. For the computation in step (B), observe that the size of the Jacobian is b × m, where b is the batch size and m is the number of parameters. The Gram matrix G_{t,B_t} = J_{t,B_t} J_{t,B_t}^⊤ in our Gram-Gauss-Newton method is of size b × b, and it only requires O(b²m + b³) operations to compute G_{t,B_t} and a matrix inverse. Multiplying the resulting matrices with f − y requires even less computation. Overall, the computational cost in step (B) is small compared to that of step (A). Given the theoretical findings above, in this section we compare our proposed GGN algorithm with several baseline algorithms in real applications. In particular, we mainly study two regression tasks, AFAD-LITE and RSNA Bone Age (rsn, 2017). The AFAD-LITE task is to predict the age of a person from facial information. The training data of the AFAD-LITE task contains 60k facial images and the corresponding age for each image. We choose ResNet-32 as the base model architecture. During training, all input images are resized to 64 × 64. We study two variants of the ResNet-32 architecture: ResNet-32 with batch normalization layers (referred to as ResNetBN), and ResNet-32 with Fixup initialization (referred to as ResNetFixup). In both settings, we use SGD as our baseline algorithm. In particular, we follow standard practice and use its momentum variant, and set the hyper-parameters lr=0.003 and momentum=0.9, determined by selecting the best optimization performance using grid search. Since batch normalization is computed over all samples within a mini-batch, it is not consistent with our assumption in Section 2 that the regression function has the form f(w, x), which only depends on w and a single input datum x. For this reason, the GGN algorithm does not directly apply to ResNetBN, and we test our proposed algorithm on ResNetFixup only. We set λ = 1 and α = 0.3 for GGN.
We follow the common practice of setting the batch size to 128 for our proposed method and all baseline algorithms. Mean square loss is used for training. The RSNA Bone Age task is part of the 2017 Pediatric Bone Age Challenge organized by the Radiological Society of North America (RSNA). It contains 12,611 labeled images. Each image in this dataset is a radiograph of a left hand labeled with the corresponding bone age. During training, all input images are resized to 64 × 64. We also choose ResNetBN and ResNetFixup for this experiment, and use the ResNetBN and ResNetFixup models trained on the first task as warm-start initialization. We use lr=0.01 and momentum=0.9 for SGD, and λ = 1 and α = 0.1 for GGN. The batch size is set to 128 in these experiments, and mean square loss is used for training. Convergence. The training loss curves of the different optimization algorithms on the AFAD-LITE and RSNA Bone Age tasks are shown in Figure 1. On both tasks, our proposed method converges much faster than the baselines. We can see from Figures 1a and 1b that, on the AFAD-LITE task, the loss using our GGN method quickly decreases to nearly zero within 30 epochs. On the contrary, for both baselines using SGD, the loss decays much more slowly than for our method, in terms of both wall-clock time and epochs. A similar advantage of GGN can also be observed on the RSNA Bone Age task. Generalization performance and different hyper-parameters. We can see that our proposed method trains much faster than the other baselines. However, as a machine learning model, generalization performance also needs to be evaluated. Due to space limitations, we only provide the test curve for the RSNA Bone Age task in Figure 2a. From the figure, we can see that the test loss of our proposed method also decreases faster than those of the baseline methods. Furthermore, the loss of our GGN algorithm is lower than those of the baselines. These results show that the GGN algorithm can not only accelerate the whole training process, but also learn better models. We then study the effect of the hyper-parameters used in the GGN algorithm. We try different λ and α on the RSNA Bone Age task and report the training loss of all experiments at the 10th epoch. All results are plotted in Figure 2c. In the figure, the x-axis is the value of λ and the y-axis is the value of α. The gray value of each point corresponds to the loss: the lighter the color, the higher the loss. We can see that the model converges faster when λ is close to 1. In GGN, α can be considered as the inverse of the learning rate in SGD. Empirically, we find that the convergence speed of the training loss is not that sensitive to α given a proper λ, such as λ = 1. Some training loss curves for different hyper-parameter configurations are shown in Figure 2b. We propose a novel Gram-Gauss-Newton (GGN) method for solving regression problems with square loss using overparameterized neural networks. Despite being a second-order method, the computation overhead of the GGN algorithm at each iteration is small compared to SGD. We also prove that if the neural network is sufficiently wide, the GGN algorithm enjoys a quadratic convergence rate. Experimental results on two regression tasks demonstrate that GGN compares favorably to SGD on these datasets with standard network architectures. Our work illustrates that second-order methods have the potential to compete with first-order methods for learning deep neural networks with a huge number of parameters.
In this paper, we mainly focus on the regression task, but our method can easily be generalized to other tasks, such as classification, as well. Consider the k-category classification problem, where the neural network outputs a vector with k entries. Although this increases the computational complexity of obtaining the Jacobian, whose size increases k times, i.e., J ∈ R^{(bk)×m}, each row of J can still be computed in parallel, which means the extra cost only comes from parallel computation overhead when we calculate in a fully parallel setting. While most first-order methods for training neural networks can hardly make use of computational resources in parallel or distributed settings to accelerate training, our GGN method can exploit this ability. For first-order methods, extra computational resources can basically only be used to calculate more gradients at a time by increasing the batch size, which harms generalization a lot. But for GGN, more resources can be used to refine the gradients and achieve accelerated convergence with the help of second-order information. It is an important piece of future work to study the application of GGN to classification problems. Notations. We use the following notations for the rest of the sections. • Let [n] = {1, ..., n}. • J_{W,x} ∈ R^{M×d} denotes the gradient ∂f(W, x)/∂W, which is of the same size as W. • The bold J_W or J(W) denotes the Jacobian with regard to all n data points, with each per-datum gradient for W vectorized. • w_r denotes the r-th row of W, which is the incoming weights of the r-th neuron. • W_0 stands for the parameters at initialization. • d_{W,x} := σ′(Wx) ∈ R^{M×1} denotes the (entry-wise) derivative of the activation function. • We use ⟨·, ·⟩ to denote the inner product, ‖·‖_2 to denote the Euclidean norm for vectors or the spectral norm for matrices, and ‖·‖_F to denote the Frobenius norm for matrices. Using these notations, the per-datum gradient and the Gram matrix G can be written in closed form, where • denotes the point-wise product. Our analysis is based on the fact that G stays not too far from its infinite-width limit, which is a positive definite matrix with least eigenvalue denoted λ_0, and we assume λ_0 > 0. λ_0 is a small data-dependent constant, and without loss of generality we assume λ_0 ≤ 1, or else we can simply rescale. The first lemma is about the estimation of relevant norms at initialization. Lemma 2 (Bounds on Norms at Initialization). If M = Ω(d log(16n/δ)), then with probability at least 1 − δ/2 the following bounds (a)-(c) hold.
(2018b), but here we restate it and its proof for the reader's convenience., then with probability at least 1 − δ/2 over random initialization, we have Proof. Because σ is Lipschitz, σ (wx i)σ (wx j) is bounded by O. For every fixed (i, j) pair, at initialization G ij is an average of independent random variables, and by Hoeffding's inequality, applying union bound for all n 2 of (i, j) pairs, with probability 1 − δ/2 at initialization we have and then Next, in Lemma 4 and 5 we will bound the relevant norms and the least eigenvalue of G inside some scope of W that covers the whole optimization trajectory starting from W 0. Specifically, we consider the range where R is determined later to make sure that the optimization trajectory remains inside B(R). The idea of the whole convergence theorem is that when the width M is large, R is very small compared to its initialization scale:. This way, neither the Jacobian nor the Gram matrix changes much during optimization. Lemma 4 (Bounds on Norms in the Optimization Scope). Suppose the events in Lemma 2 hold. There exists a constant C > 0 such that if M ≥ CR 2, we have the following: (a) For any W ∈ B(R), we have Also, for any W ∈ B(R), we have Proof. (a). This is straightforward from Lemma 2(a), the definition of B(R), and M = Ω(R 2). (b). According to the O-smoothness of the activation, we have so we can bound And according to, we have Also, taking W 1 = W and W 2 = W 0, combining with Lemma 2(c), we see there exists C such that for M ≥ CR 2 we have J W F = O, and naturally, we have and thus combined with Lemma 3, we know that G W remains invertible when W ∈ B(R) and satisfies Proof. Based on the in Lemma 4(b), we have To make the above less than Proof idea. In this section, we use W t, t ∈ {0, 1, · · ·} to represent the parameter W after t iterations. For convenience, J t, G t, f t is short for J Wt, G Wt, f Wt respectively. We introduce For each iteration, if G t is invertible, we have Then we control the first term of the right hand side based on Lemma 4 in the following form and plugging into along with norm bounds on J and G we obtain a second-order convergence. Formal proof. Let R t = W t − W t+1 F for t ∈ {0, 1, · · ·}. We take R = Θ n λ0 in Lemma 4 and 5 (the constant is chosen to make the right hand side of hold). We prove that there exists an M = Ω max (with enough constant) that suffices. First we can easily verify that all the requirements for M in Lemma 2-5 can be satisfied. Hence, with probability at least 1 − δ all the events in Lemma 2-5 hold. Under this situation, we do induction on t to show the following: • (a). W t ∈ B(R). • As long as (b) is true for all t, then choosing M large enough to make sure the series {f t − y 2} ∞ t=0 converges to zero, we obtain the second-order convergence property. For t = 0, (a) and (b) hold by definition. Suppose the proposition holds for t = 0, · · ·, T. Then for t = 0, · · ·, T, G t is invertible. Recall that the update rule is vec((Lemma 4 and 5) According to Lemma 2(b) and the assumption that the target label y i = O, we have f 0 − y 2 2 = O(n). When T > 1, the decay ratio at the first step is bounded as ) with enough constant can make sure r is a constant less than 1, in which case the second-order convergence property (b) will ensure a faster ratio of decay at each subsequent step in f t − y T t=0. Combining, we have for variables r ∈ R n×1 (where vec(W) = vec(W 0) + J r), using the Gauss-Siedel method, which means solving for the i-th batch in every epoch. 
Therefore, it is natural that the matrix is introduced. We will show later that the real update follows In order to prove the theorem, we need some additional lemmas as follows. Lemma 6 (Formula for the Update). If the Gram matrix at each step is invertible, then: (b) The formula for f − y is (c) The update of f − y satisfies where Or, if we denote then we have Specifically, we have where Proof. (a) For, this is exactly the update formula for the i-th batch where the Jacobian and Gram matrix are J ti,i and G ti,ii respectively. Note that), we obtain. (c). Based on we know that where the index goes in decreasing order from left to right. So in order to prove we only need to prove that which we will prove by induction on i. For i = 0 it is trivial that D t − L t = U t0 by definition. Suppose holds for i, then... which proves. Note that by the definition, we have U ti = D ti − L ti, we can then obtain by By, we can see that the iteration matrix A t is close to the matrix A defined in. The convergence of the algorithm is much related to the eigenvalues of A. In the next two lemmas, we bound the spectral radius of A and provide an auxiliary on the convergence on perturbed matrices based on the spectral radius. Lemma 7. Suppose the least eigenvalue of the initial Gram matrix G 0 satisfies λ min (G 0) ≥ 3 4 λ 0, (which is true with probability 1 − δ if M = Ω n 2 log(2n/δ) λ 2 0, according to Lemma 3). Also assume J W0,x l F = O for any l ∈ [n] (which is true with probability 1 − δ, according to Lemma 2). Then the spectral radius A, or equivalently, maximum norm of eigenvalues of A, denoted ρ(A), satisfies Proof. For any eigenvalue λ ∈ C of A, it is an eigenvalue of A, so there exists a corresponding where. It is not hard to see that Also, since by our assumption each entry of L (or of G 0) is at most O, we have Now we use take an inner product of v with and get and therefore This concludes the proof for this lemma. Lemma 8. Denote ρ(A) = ρ 0 ≤ 1. Let A be diagonalized as A = P −1 QP and µ = P 2 P −1 2 (see Assumption 2). Suppose we have A t − A 0 2 ≤ δ for t ≤ T, then In addition to the bounds used in Theorem 1, we provide the next lemma with useful bounds of the norms and eigenvalues of relevant matrices in the optimization scope Lemma 9 (Relevant Bounds for the Matrices in the Optimization Scope). We assume the events in Lemma 2 and 3 hold, and let M = for some large enough constant C, then: For any (t, i)-pair, we assume W t i ∈ B(R) for all i ∈ [k] when t ≤ t and i ∈ [i] when t = t in the following propositions.. Suppose up to t, W t is in B(R) (which means for i ∈ [k + 1], and t ∈ [t − 1], W ti ∈ B(R)). (By the positive-definiteness of G 0 and Lemma 3) (b). By Lemma 4 we know that within B(R), the size of the Jacobian J x l F w.r.t. data x l is O,, this can be applied to each entry of D, L, U, etc., including those J t (i,i +1) terms by the convexity of B(R), and we can see that each entry of these matrices has size at most O and varies inside an O R √ M range. Therefore we get and suffices. With all the preparations, now we are ready to prove Theorem 2. The logic of the proof is just the same as what we did in the formal proof of Theorem 1, where we then used induction on t and now we use induction on the pair (t, i). The key, is still selecting some R so that in each step W ti remains in B(R). Combined with the previous Lemmas, in B(R) we have A t being close to A, and then convergence is guaranteed. Formal proof of Theorem 2. Let R ti = W t(i+1) − W ti F. 
We take in the range B(R) (where the constant is chosen to make the right hand side of hold). We prove that there exists an with a large enough constant that suffices. First we can easily verify that all the requirements for M in Lemma 2-9 (most importantly, Lemma 9) can be satisfied. Hence, with probability at least 1 − δ all the events in Lemma 2-9 hold. Under this situation, we do induction on (t, i) (t ∈ {1, 2, · · ·}, i ∈ [k], by dictionary order) to show that: For (t, i) =, it holds by definition. Suppose the proposition holds up to (t, i). Then since W ti ∈ B(R), by Lemma 4 we know that 2. This naturally gives us λ min (G ti,ii) ≥ λ0 2, which means G ti,ii is invertible. Similar to the argument in the proof of Lemma 9, we know that each entry of J ti, J t(i,i+1), D ti, L ti, U ti, etc., whose index of (t, i) only contains i with i ≤ i, is of scale Based on the update rule, we have Since this also hold for previous (t, i) pairs, we have which is the reason why we need to take R = Θ. This means that W ti ∈ B(R) holds. And by induction, we have proved that W remains in B(R) throughout the optimization process. The last thing to do is to bound f t − y. By the same logic from above, we have, which proves our theorem. In this section, we give test performance curve of AFAD-LITE dataset in Figure 3 under the same setting with Section 4. In addition, we provide more baseline , e.g. Adam and K-FAC on RSNA Bone Age dataset. Since we find that, as another alternation of BN, Group Normalization (GN) can largely improve the performance of Adam, we also implement the GN layer for our GGN method. We use grid search to obtain approximately best hyper-parameters for every experiment. All experiments are performed with batch size 128, input size 64*64 and weight decay 10 −4. We set the number of groups to 8 for ResNetGN. Other hyper-parameters are listed below. • SGD+ResNetBN: learning rate 0.01, momentum 0.9. • Adam+ResNetBN: learning rate 0.001. • SGD+ResNetGN: learning rate 0.002, momentum 0.9. • Adam+ResNetGN: learning rate 0.0005. • K-FAC+ResNet: learning rate 0.02, momentum 0.9, = 0.1, update frequency 100. • GGN+ResNetGN: λ = 1, α = 0.075. The convergence are summarized in Figure 4. Note to make comparison clearer, we use logarithmic scale for training curves. | A novel Gram-Gauss-Newton method to train neural networks, inspired by neural tangent kernel and Gauss-Newton method, with fast convergence speed both theoretically and experimentally. | 653 | scitldr |
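Relating back to the mini-batch analysis of Theorem 2, the sketch below illustrates the block Gauss-Seidel splitting G_0 = D - L - U over the batches and the spectral-radius condition rho(A) < 1 that Lemmas 7 and 8 are concerned with. A generic symmetric positive definite matrix stands in for the limiting Gram matrix G_0 and the batch sizes are arbitrary, so this only illustrates the splitting itself, not the paper's exact construction of A_t.

import numpy as np

rng = np.random.default_rng(1)
n, k = 24, 4                          # n data points split into k equal batches
bs = n // k

B = rng.standard_normal((n, n))
G0 = B @ B.T / n + 0.5 * np.eye(n)    # stand-in for a positive definite Gram matrix

# Block splitting G0 = D - L - U: D keeps the diagonal blocks, -L the strictly
# lower blocks and -U the strictly upper blocks (one block per pair of batches).
D = np.zeros_like(G0); L = np.zeros_like(G0); U = np.zeros_like(G0)
for i in range(k):
    for j in range(k):
        blk = G0[i*bs:(i+1)*bs, j*bs:(j+1)*bs]
        if i == j:
            D[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = blk
        elif i > j:
            L[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = -blk
        else:
            U[i*bs:(i+1)*bs, j*bs:(j+1)*bs] = -blk

A = np.linalg.solve(D - L, U)         # iteration matrix of one Gauss-Seidel sweep
rho = max(abs(np.linalg.eigvals(A)))
print("spectral radius of A:", rho)

For a symmetric positive definite G_0 the block Gauss-Seidel sweep is a contraction, so the printed radius is below one; Lemma 8 then controls the effect of replacing A by the perturbed iteration matrices A_t encountered during training.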
Recent pretrained sentence encoders achieve state of the art on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures? We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA;), which we use to investigate the grammatical knowledge of three pretrained encoders, including the popular OpenAI Transformer and BERT . We fine-tune these encoders to do acceptability classification over CoLA and compare the models’ performance on the annotated analysis set. Some phenomena, e.g. modification by adjuncts, are easy to learn for all models, while others, e.g. long-distance movement, are learned effectively only by models with strong overall performance, and others still, e.g. morphological agreement, are hardly learned by any model. The effectiveness and ubiquity of pretrained sentence embeddings for natural language understanding has grown dramatically in recent years. Recent sentence encoders like OpenAI's Generative Pretrained Transformer (GPT;) and BERT achieve the state of the art on the GLUE benchmark . Among the GLUE tasks, these stateof-the-art systems make their greatest gains on the acceptability task with the Corpus of Linguistic Acceptability (CoLA;). CoLA contains example sentences from linguistics publications labeled by experts for grammatical acceptability, and written to show subtle grammatical features. Because minimal syntactic differences can separate acceptable sentences from unacceptable ones (What did Bo write a book about? / *What was a book about written by Bo?), and acceptability classifiers are more reliable when trained on GPT and BERT than on recurrent models, it stands to reason that GPT and BERT have better implicit knowledge of syntactic features relevant to acceptability. Our goal in this paper is to develop an evaluation dataset that can locate which syntactic features that a model successfully learns by identifying the syntactic domains of CoLA in which it performs the best. Using this evaluation set, we compare the syntactic knowledge of GPT and BERT in detail, and investigate the strengths of these models over the baseline BiLSTM model published by. The analysis set includes expert annotations labeling the entire CoLA development set for the presence of 63 fine-grained syntactic features. We identify many specific syntactic features that make sentences harder to classify, and many that have little effect. For instance, sentences involving unusual or marked argument structures are no harder than the average sentence, while sentences with long distance dependencies are hard to learn. We also find features of sentences that accentuate or minimize the differences between models. Specifically, the transformer models seem to learn long-distance dependencies much better than the recurrent model, yet have no advantage on sentences with morphological violations. Sentence Embeddings Robust pretrained word embeddings like word2vec and GloVe have been extemely successful and widely adopted in machine learning applications for language understanding. Recent research tries to reproduce this success at the sentence level, in the form of reusable sentence embeddings with pretrained weights. 
These rep- Table 1: A random sample of sentences from the CoLA development set, shown with their original acceptability labels (= acceptable, *=unacceptable) and with a subset of our new phenomenon-level annotations.resentations are useful for language understanding tasks that require a model to classify a single sentence, as in sentiment analysis and acceptability classification; or a pair of sentences, as in paraphrase detection and natural language inference (NLI); or that require a model to generate text based on an input text, as in question-answering. Early work in this area primarily uses recurrent models like Long Short-Term Memory (, LSTM) networks to reduce variable length sequences into fixed-length sentence embeddings. Current state of the art sentence encoders are pretrained on language modeling or related tasks with unlabeled-data. Among these, ELMo uses a BiLSTM architecture, while GPT and BERT use the Transformer architecture . Unlike most earlier approaches where the weights of the encoder are frozen after pretraining, the last two fine-tune the encoder on the downstream task. With additional fine-tuning on secondary tasks like NLI, these are the top performing models on the GLUE benchmark . The evaluation and analysis of sentence embeddings is an active area of research. One branch of this work uses probing tasks which can reveal how much syntactic information a sentence embedding encodes about, for instance, tense and voice , sentence length and word content , or syntactic depth and morphological number .Related work indirectly probes features of sentence embeddings using language understanding tasks with custom datasets manipulating specific grammatical features. uses several tasks including acceptability classification of sentences with manipulated verbal inflection to investigate whether LSTMs can identify violations in subject-verb agreement, and therefore a (potentially long distance) syntactic dependency. test whether sentence embeddings encode the scope of negation and semantic roles using semi-automatically generated sentences exhibiting carefully controlled syntactic variation. also semiautomatically generate data and use acceptability classification to test whether word and sentence embeddings encode information about verbs and their argument structures. CoLA & Acceptability Classification The Corpus of Linguistic Acceptability is a dataset of 10k example sentences including expert annotations for grammatical acceptability. The sentences are example sentences taken from 23 theoretical linguistics publications, mostly about syntax, including undergraduate textbooks, research articles, and dissertations. Such example sentences are usually labeled for acceptability by their authors or a small group of native English speakers. A small random sample of the CoLA development set (with our added annotations) can be seen in Table 1.Within computational linguistics, the acceptability classification task has been explored in var-ious settings. train RNNs to do acceptability classification over sequences of POS tags corresponding to example sentences from a syntax textbook. also train RNNs, but using naturally occurring sentences that have been automatically manipulated to be unacceptable. predict acceptability from language model probabilities, applying this technique to sentences from a syntax textbook, and sentences which were translated round-trip through various languages. Lau et al. 
attempt to model gradient crowdsourced acceptability judgments, rather than binary expert judgments. This reflects an ongoing debate about whether binary expert judgments like those in CoLA are reliable . We remain agnostic as to the role of binary judgments in linguistic theory, taking the expert judgments in CoLA at face value. measure human performance on a subset of CoLA (see TAB5), finding that new human annotators, while not in perfect agreement with the judgments in CoLA, still outperform the best neural network models by a wide margin. We introduce a grammatically annotated version of the entire CoLA development set to facilitate detailed error analysis of acceptability classifiers. These 1043 sentences are expert-labeled for the presence of 63 minor grammatical features organized into 15 major features. Each minor feature belongs to a single major feature. A sentence belongs to a major feature if it belongs to one or more of the relevant minor features. The Appendix includes descriptions of each feature along with examples and the criteria used for annotation. The 63 minor features and 15 major features are illustrated in TAB1. Considering minor features, an average of 4.31 features is present per sentence (SD=2.59). The average feature is present in 71.3 sentences (SD=54.7). Turning to major features, the average sentence belongs to 3.22 major features (SD=1.66), and the average major feature is present in 224 sentences (SD=112). Every sentence is labeled with at least one feature. The sentences were annotated manually by one of the authors, who is a PhD student with extensive training in formal linguistics. The features were developed in a trial stage, in which the annotator performed a similar annotation with different annotation schema for several hundred sentences from CoLA not belonging to the development set. Here we briefly summarize the feature set in order of the major features. Many of these constructions are well-studied in syntax, and further can be found in textbooks such as BID0 and.Simple This major feature contains only one minor feature, SIMPLE, including sentences with a syntactically simplex subject and predicate. Pred(icate) These three features correspond to predicative phrases, including copular constructions, small clauses (I saw Bo jump), and atives/depictives (Bo wiped the table clean).Adjunct These six features mark various kinds of optional modifiers. This includes modifiers of NPs (The boy with blue eyes gasped) or VPs (The cat meowed all morning), and temporal (Bo swam yesterday) or locative (Bo jumped on the bed).Argument types These five features identify syntactically selected arguments, differentiating, for example, obliques (I gave a book to Bo), PP arguments of NPs and VPs (Bo voted for Jones), and expletives (It seems that Bo left).Argument Alternations These four features mark VPs with unusual argument structures, including added arguments (I baked Bo a cake) or dropped arguments (Bo knows), and the passive (I was applauded).Imperative This contains only one feature for imperative clauses (Stop it!).Bind These are two minor features, one for bound reflexives (Bo loves himself), and one for other bound pronouns (Bo thinks he won).Question These five features apply to sentences with question-like properties. 
They mark whether the interrogative is an embedded clause (I know who you are), a matrix clause (Who are you?), or a relative clause (Bo saw the guy who left); whether it contains an island out of which extraction is unacceptable (*What was a picture of hanging on the wall?); or whether there is pied-piping or a multiword wh-expressions (With whom did you eat?). S-Syntax These seven features mark various unrelated syntactic constructions, including dislocated phrases (The boy left who was here earlier); movement related to focus or information structure (This I've gotta see); coordination, subordinate clauses, and ellipsis (I can't); or sentencelevel adjuncts (Apparently, it's raining).Determiner These four features mark various determiners, including quantifiers, partitives (two of the boys), negative polarity items (I *do/don't have any pie), and comparative constructions. Violations These three features apply only to unacceptable sentences, and only ones which are ungrammatical due to a semantic or morphological violation, or the presence or absence of a single salient word. We wish to emphasize that these features are overlapping and in many cases are correlated, thus not all from using this analysis set will be independent. We analyzed the pairwise Matthews Correlation Coefficient (MCC;) of the 63 minor features (giving 1953 pairs), and of the 15 major features (giving 105 pairs). MCC is a special case of Pearson's r for Boolean variables. 1 These are summarized in TAB3. Regarding the minor features, 60 pairs had a correlation of 0.2 or greater, 17 had a correlation of 0.4 or greater, and 6 had a correlation of 0.6 or greater. None had an anti-correlation of greater magnitude than -0.17. Turning to the major features, 6 pairs had a correlation of 0.2 or greater, and 2 had an anti-correlation of greater magnitude than -0.2.We can see at least three reasons for these observed correlations. First, some correlations can be attributed to overlapping feature definitions. For instance, EXPLETIVE arguments (e.g. There are birds singing) are, by definition, non-canonical arguments, and thus are a subset of ADD ARG. However, some added arguments, such as benefactives (Bo baked Mo a cake), are not expletives. Second, some correlations can be attributed to grammatical properties of the relevant constructions. For instance, QUESTION and AUX are correlated because main-clause questions in English require subject-aux inversion and in many cases the insertion of auxiliary do (Do lions meow?). Third, some correlations may be a consequence of the sources sampled in CoLA and the phenomena they focus on. For instance, the unusually high correlation of EMB-Q and ELLIPSIS/ANAPHOR can be attributed to , which is an article about the sluicing construction involving ellipsis of an embedded interrogative (e.g. I saw someone, but I don't know who).Finally, two strongest anti-correlations between major features are between SIMPLE and the two features related to argument structure, ARGU-MENT TYPES and ARG ALTERN. This follows from the definition of SIMPLE, which excludes any sentence containing a large number or unusual configuration of arguments. We train MLP acceptability classifiers for CoLA on top of three sentence encoders: the CoLA baseline encoder with ELMo-style embeddings, OpenAI GPT, and BERT. We use publicly available sentence encoders with pretrained weights. 2 LSTM encoder: CoLA baseline The CoLA baseline model is the sentence encoder with the highest performance on CoLA from Warstadt et al. 
The encoder uses a BiLSTM, which reads the sentence word-by-word in both directions, with maxpooling over the hidden states. Similar to ELMo , the inputs to the BiLSTM are the hidden states of a language model (only a forward language model is used in contrast with ELMo). The encoder is trained on a real/fake discrimination task which requires it to identify whether a sentence is naturally occurring or automatically generated. We train acceptability classifiers on CoLA using the CoLA baselines codebase with 20 random restarts, following the original authors' transfer-learning approach: The sentence encoder's weights are frozen, and the sentence embedding serves as input to an MLP with a single hidden layer. All hyperparameters are held constant across restarts. Transformer encoders: GPT and BERT In contrast with recurrent models, GPT and BERT use a self attention mechanism which combines representations for each (possibly non-adjacent) pair of words to give a sentence embedding. GPT is trained using a standard language modeling task, while BERT is trained with masked language modeling and next sentence prediction tasks. For each encoder, we use the jiant toolkit 3 to train 20 random restarts on CoLA feeding the pretrained models published by these authors into a single output layer. Following the methods of the original authors, we fine-tune the encoders during training on CoLA. All hyperparameters are held constant across restarts. The overall performance of the three sentence encoders is shown in TAB5. Performance on CoLA is measured using MCC . We present the best single restart for each encoder, the mean over restarts for an encoder, and the of ensembling the restarts for a given encoder, i.e. taking the majority classification for a given sentence, or the majority label of acceptable if tied. 4 For BERT , we exclude 5 out of the 20 restarts because they were degenerate (MCC=0). Across the board, BERT outperforms GPT, which outperforms the CoLA baseline. However, BERT and GPT are much closer in performance than they are to CoLA baseline. While ensemble performance exceeded the average for BERT and GPT, it did not outperform the best single model. The for the major features and minor features are shown in FIG1, respectively. For each feature, we measure the MCC of the sentences including that feature. We plot the mean of these across the different restarts for each model, and error bars mark the mean ±1 standard deviation. For the VIOLATIONS features, MCC is technically undefined because these features only contain unacceptable sentences. We report MCC in these cases by including for each feature a single acceptable example that is correctly classified by all models. Comparison across features reveals that the presence of certain features has a large effect on performance, and we comment on some overall patterns below. Within a given feature, the effect of model type is overwhelmingly stable, and resembles the overall difference in performance. However, we observe several interactions, i.e. specific features where the relative performance of models does not track their overall relative performance. Comparing Features Among the major features FIG1, performance is universally highest on the SIMPLE sentences, and is higher than each model's overall performance. Though these sentences are simple, we notice that the proportion of ungrammatical ones is on par with the entire dataset. 
Otherwise we find that a model's performance on sentences of a given feature is on par with or lower than its overall performance, reflecting the fact that features mark the presence of unusual or complex syntactic structure. Performance is also high (and close to overall performance) on sentences with marked argument structures (ARGUMENT TYPES and ARG(UMENT) ALT(ERNATION)). While these models are still worse than human (overall) per- formance on these sentences, this indicates that argument structure is relatively easy to learn. Comparing different kinds of embedded content, we observe higher performance on sentences with embedded clauses (major feature=COMP CLAUSE) embedded VPs (major feature=TO-VP) than on sentences with embedded interrogatives (minor features=EMB-Q, REL CLAUSE). An exception to this trend is the minor feature NO C-IZER, which labels complement clauses without a complementizer (e.g. I think that you're crazy). Low performance on these sentences compared to most other features in COMP CLAUSE might indicate that complementizers are an important syntactic cue for these models. As the major feature QUESTION shows, the difficulty of sentences with question-like syntax applies beyond just embedded questions. Excluding polar questions, sentences with question-like syntax almost always involve extraction of a wh-word, creating a long-distance dependency between the wh-word and its extraction site, which may be difficult for models to recognize. The most challenging features are all related to VIOLATIONS. Low performance on INFL/AGR VIOLATIONS, which marks morphological violations (He washed yourself, This is happy), is especially striking because a relatively high proportion (29%) of these sentences are SIMPLE. These models are likely to be deficient in encoding morphological features is that they are word level models, and do not have direct access sub-word information like inflectional endings, which indicates that these features are difficult to learn effectively purely from lexical distributions. Finally, unusual performance on some features is due to small samples, and have a high standard deviation, suggesting the is unreliable. This includes CP SUBJ, FRAG/PAREN, IMPERATIVE, NPI/FCI, and COMPARATIVE.Comparing Models Comparing within-feature performance of the three encoders to their overall performance, we find they have differing strengths and weaknesses. BERT stands out over other models in DEEP EMBED, which includes challenging sentences with doubly-embedded, as well as in several features involving extraction (i.e. longdistance dependencies) such as VP+EXTRACT and INFO-STRUC. The transformer models show evidence of learning long-distance dependencies better than the CoLA baseline. They outperform the CoLA baseline by an especially wide margin on BIND:REFL, which all involves establishing a dependency between a reflexive and its antecedent (Bo tries to love himself). They also have a large advantage in DISLOCATION, in which expressions are separated from their dependents (Bo practiced on the train an important presentation). The advantage of BERT and GPT may be due in part to their use of the transformer architecture. Unlike the BiLSTM used by the CoLA baseline, the transformer uses a self-attention mechanism that associates all pairs of words regardless of distance. In some cases models showed surprisingly good or bad performance, revealing possible idiosyncrasies of the sentence embeddings they output. 
For instance, the CoLA baseline performs on par with the others on the major feature ADJUNCT, especially considering the minor feature PARTICLE (Bo looked the word up).Furthermore, all models struggle equally with sentences in VIOLATION, indicating that the advantages of the transformer models over the CoLA baseline does not extend to the detection of morphological violations (INFL/AGR VIOLATION) or single word anomalies (EXTRA/MISSING EXPR). For comparison, we analyze the effect of sentence length on acceptability classifier performance. The are shown in FIG3. The for the CoLA baseline are inconsistent, but do drop off as sentence length increases. For BERT and GPT, performance decreases very steadily with length. Exceptions are extremely short sentences (length 1-3), which may be challenging due to insufficient information; and extremely long sentences, where we see a small (but somewhat unreliable) boost in BERT's performance. BERT and GPT are generally quite close in performance, except on the longest sentences, where BERT's performance is considerably better. Using a new grammatically annotated analysis set, we identify several syntactic phenomena that are predictive of good or bad performance of current state of the art sentence encoders on CoLA. We also use these to develop hypotheses about why BERT is successful, and why transformer models outperform sequence models. Our findings can guide future work on sentence embeddings. A current weakness of all sentence encoders we investigate, including BERT, is the identification of morphological violations. Future engineering work should investigate whether switching to a character-level model can mitigate this problem. Additionally, transformer models appear to have an advantage over sequence models with long-distance dependencies, but still struggle with these constructions relative to more local phenomena. It stands to reason that this performance gap might be widened by training larger or deeper transformer models, or training on longer or more complex sentences. This analysis set can be used by engineers interested in evaluating the syntactic knowledge of their encoders. Finally, these findings suggest possible controlled experiments that could confirm whether there is a causal relation between the presence of the syntactic features we single out as interesting and model performance. Our are purely correlational, and do not mark whether a particular construction is crucial for the acceptability of the sentence. Future experiments following and can semi-automatically generate datasets manipulating, for example, length of long-distance dependencies, inflectional violations, or the presence of interrogatives, while controlling for factors like sentence length and word choice, in order determine the extent to which these features impact the quality of sentence embeddings. Included a. John owns the book. b. Park Square has a festive air. c. *Herself likes Mary's mother. FORMULA0 Excluded a. Bill has eaten cake. b. I gave Joe a book. A. These are sentences including the verb be used predicatively. Also, sentences where the object of the verb is itself a predicate, which applies to the subject. Not included are auxiliary uses of be or other predicate phrases that are not linked to a subject by a verb. These sentences involve predication of a nonsubject argument by another non-subject argument, without the presence of a copula. Some of these cases may be analyzed as small clauses. (see , pp. 189-193) These are adjuncts modifying noun phrases. 
Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Single-word prenominal adjectives are excluded, as are relative clauses (this has another category). These are adjuncts of VPs and NPs that specify a time or modify tense or aspect or frequency of an event. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. These are adjuncts of VPs and NPs not described by some other category (with the exception of), i.e. not temporal, locative, or relative clauses. Adjuncts are (usually) optional, and they do not change the category of the expression they modify. Prepositional Phrase arguments of NPs or APs are individual-denoting arguments of a noun or adjective which are marked by a proposition. Arguments are selected for by the head, and they are (generally) not optional, though in some cases they may be omitted where they are understood or implicitly existentially quantified over. Prepositional arguments introduced with by. Usually, this is the (semantic) subject of a passive verb, but in rare cases it may be the subject of a nominalized verb. Arguments are usually selected for by the head, and they are generally not optional. In this case, the argument introduced with by is semantically selected for by the verb, but it is syntactically optional. See Adger (2003, p.190) and. (22 Expletives, or dummy arguments, are semantically inert arguments. The most common expletives in English are it and there, although not all occurrences of these items are expletives. Arguments are usually selected for by the head, and they are generally not optional. In this case, the expletive occupies a syntactic argument slot, but it is not semantically selected by the verb, and there is often a syntactic variation without the expletive. See Adger (2003, p.170-172) and Kim and Sells (2008, p.82-83 The passive voice is marked by the demotion of the subject (either complete omission or to a byphrase) and the verb appearing as a past participle. In the stereotypical construction there is an auxiliary be verb, though this may be absent. See Kim and Sells (2008, p.175-190) et al. (2013, p.163-186) and Sag et al. (2003, p.203-226). A.7.2 Binding:Other (Binding of Other Pronouns) These are cases in which a non-reflexive pronoun appears along with its antecedent. This includes donkey anaphora, quantificational binding, and bound possessives, among other bound pronouns. See Sportiche et al. (2013, p.163-186) and Sag et al. (2003, p.203-226). These are sentences in which the matrix clause is interrogative (either a wh-or polar question). See Adger (2003, pp.282-213), Kim and Sells (2008, pp.193-222), and Carnie (2013, p.315-350 Relative clauses are noun modifiers appearing with a relativizer (either that or a wh-word) and an associated gap. See Kim and Sells (2008, p.223-244 | We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments. | 654 | scitldr |
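To make the evaluation pipeline described in the main text concrete, the following sketch shows majority-vote ensembling over random restarts (ties resolved to the acceptable label) and MCC computed both overall and restricted to the sentences carrying a given feature. The array shapes mirror the CoLA development set (1043 sentences, 63 minor features, 20 restarts), but the values are random placeholders rather than actual predictions or annotations.

import numpy as np

def mcc(y_true, y_pred):
    # Matthews correlation for 0/1 labels (Pearson's r for two Boolean variables).
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1)); tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1)); fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0

def ensemble(preds):
    # Majority vote across restarts; a tie goes to the "acceptable" label (1).
    return (np.mean(preds, axis=0) >= 0.5).astype(int)

def per_feature_mcc(y_true, y_pred, feature_mask):
    # MCC restricted to the sentences annotated with a given feature.
    return mcc(y_true[feature_mask], y_pred[feature_mask])

# Hypothetical toy arrays, only to show the shapes involved.
rng = np.random.default_rng(2)
gold  = rng.integers(0, 2, size=1043)                      # dev-set acceptability labels
preds = rng.integers(0, 2, size=(20, 1043))                # predictions of 20 restarts
feats = rng.integers(0, 2, size=(1043, 63)).astype(bool)   # feature annotations
voted = ensemble(preds)
print("overall MCC:", mcc(gold, voted))
print("MCC on feature 0:", per_feature_mcc(gold, voted, feats[:, 0]))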
When considering simultaneously a finite number of tasks, multi-output learning enables one to account for the similarities of the tasks via appropriate regularizers. We propose a generalization of the classical setting to a continuum of tasks by using vector-valued RKHSs. Several fundamental problems in machine learning and statistics can be phrased as the minimization of a loss function described by a hyperparameter. The hyperparameter might capture numerous aspects of the problem: (i) the tolerance w. r. t. outliers as the -insensitivity in Support Vector Regression , (ii) importance of smoothness or sparsity such as the weight of the l 2 -norm in Tikhonov regularization , l 1 -norm in LASSO , or more general structured-sparsity inducing norms BID3, (iii) Density Level-Set Estimation (DLSE), see for example one-class support vector machines One-Class Support Vector Machine (OCSVM, Schölkopf et al., 2000), (iv) confidence as exemplified by Quantile Regression , or (v) importance of different decisions as implemented by Cost-Sensitive Classification . In various cases including QR, CSC or DLSE, one is interested in solving the parameterized task for several hyperparameter values. Multi-Task Learning provides a principled way of benefiting from the relationship between similar tasks while preserving local properties of the algorithms: ν-property in DLSE or quantile property in QR .A natural extension from the traditional multi-task setting is to provide a prediction tool being able to deal with any value of the hyperparameter. In their seminal work, extended multi-task learning by considering an infinite number of parametrized tasks in a framework called Parametric Task Learning (PTL). Assuming that the loss is piecewise affine in the hyperparameter, the authors are able to get the whole solution path through parametric programming, relying on techniques developed by.In this paper 1, we relax the affine model assumption on the tasks as well as the piecewise-linear assumption on the loss, and take a different angle. We propose Infinite Task Learning (ITL) within the framework of functionvalued function learning to handle a continuum number of parameterized tasks using Vector-Valued Reproducing Kernel Hilbert Space (vv-). After introducing a few notations, we gradually define our goal by moving from single parameterized tasks to ITL through multi-output learning. A supervised parametrized task is defined as follows. Let (X, Y) ∈ X × Y be a random variable with joint distribution P X,Y which is assumed to be fixed but unknown; we also assume that Y ⊂ R. We have access to n independent identically distributed (i. i. d.) observations called training samples: S:=((x i, y i)) n i=1 ∼ P ⊗n X,Y. Let Θ be the domain of hyperparameters, and v θ: Y × Y → R be a loss function associated to θ ∈ Θ. Let H ⊂ F (X ; Y) denote our hypothesis class; throughout the paper H is assumed to be a Hilbert space with inner product ·, · H. For a given θ, the goal is to estimate the minimizer of the expected risk DISPLAYFORM0 over H, using the training sample S. This task can be addressed by solving the regularized empirical risk minimization problem DISPLAYFORM1 where R θ S (h):= 1 n n i=1 v θ (y i, h(x i)) is the empirical risk and Ω: H → R is a regularizer. Below we give two examples. Quantile Regression: In this setting θ ∈. For a given hyperparameter θ, in Quantile Regression the goal is to predict the θ-quantile of the real-valued output conditional distribution P Y |X. 
The task can be tackled using the pinball loss defined in Eq.. DISPLAYFORM2 Density Level-Set Estimation: Examples of parameterized tasks can also be found in the unsupervised setting. For instance in outlier detection, the goal is to separate outliers from inliers. A classical technique to tackle this task is OCSVM (Schölkopf et al., 2000). OCSVM has a free parameter θ ∈, which can be proven to be an upper bound on the fraction of outliers. This unsupervised learning problem can be empirically described by the minimization of a regularized empirical risk R S θ (h, t) + Ω(h), solved jointly over h ∈ H and t ∈ R with DISPLAYFORM3 In the aforementioned problems, one is rarely interested in the choice of a single hyperparameter value (θ) and associated risk R S θ, but rather in the joint solution of multiple tasks. The naive approach of solving the different tasks independently can easily lead to inconsistencies. A principled way of solving many parameterized tasks has been cast as a MTL problem which takes into account the similarities between tasks and helps providing consistent solutions. For example it is possible to encode the similarities of the different tasks in MTL through an explicit constraint function . In the current work, the similarity between tasks is designed in an implicit way through the loss function and the use of a kernel on the hyperparameters. Moreover, in contrast to MTL, in our case the input space and the training samples are the same for each task; a task is specified by a value of the hyperparameter. This setting is sometimes refered to as multi-output learning (Álvarez et al., 2012).Formally, assume that we have p tasks described by parameters (θ j) p j=1. The idea of multi-task learning is to minimize the sum of the local loss functions R S θj, i. e.arg min DISPLAYFORM4 where the individual tasks are modelled by the real-valued h j functions, the overall R p -valued model is the vectorvalued function h: x → (h 1 (x),..., h p (x)), and Ω is a regularization term encoding similarities between tasks. Such approaches have been developed in for QR and in for DLSE.Learning a continuum of tasks: In the following, we propose a novel framework called Infinite Task Learning in which we learn a function-valued function h ∈ F (X ; F (Θ; Y)). Our goal is to be able to handle new tasks after the learning phase and thus, not to be limited to given predefined values of the hyperparameter. Regarding this goal, our framework generalizes the Parametric Task Learning approach introduced by , by allowing a wider class of models and relaxing the hypothesis of piece-wise linearity of the loss function. Moreover a nice byproduct of this vv-RKHS based approach is that one can benefit from the functional point of view, design new regularizers and impose various constraints on the whole continuum of tasks, e. g.,• The continuity of the θ → h(x)(θ) function is a natural desirable property: for a given input x, the predictions on similar tasks should also be similar.• Another example is to impose a shape constraint in QR: the conditional quantile should be increasing w. r. t. the hyperparameter θ. This requirement can be imposed through the functional view of the problem.• In DLSE, to get nested level sets, one would want that for all x ∈ X, the decision function θ → 1 R+ (h(x)(θ) − t(θ)) changes its sign only once. To keep the presentation simple, in the sequel we are going to focus on ITL in the supervised setting; unsupervised tasks can be handled similarly. 
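For concreteness, the pinball loss referenced above for quantile regression has the standard form v_theta(y, y') = max(theta * (y - y'), (theta - 1) * (y - y')), and the integrated risk of ITL replaces a single theta by a weighted combination over a continuum of levels. The sketch below uses a uniform grid with equal weights as a stand-in for the measure mu, and a placeholder predictor h; both are illustrative assumptions rather than the method's actual components.

import numpy as np

def pinball(theta, y, pred):
    # theta * (y - pred) when y >= pred, (1 - theta) * (pred - y) otherwise.
    r = y - pred
    return np.maximum(theta * r, (theta - 1.0) * r)

def integrated_empirical_risk(h, X, y, thetas, weights):
    # (1/n) sum_i sum_j w_j * v_{theta_j}(y_i, h(x_i)(theta_j)):
    # the pinball loss integrated over the continuum of quantile levels.
    total = 0.0
    for x_i, y_i in zip(X, y):
        preds = np.array([h(x_i, t) for t in thetas])
        total += np.sum(weights * pinball(thetas, y_i, preds))
    return total / len(y)

# Toy usage: a uniform grid with equal weights stands in for mu on (0, 1),
# and h is a hypothetical predictor, not a trained model.
thetas  = np.linspace(0.05, 0.95, 19)
weights = np.full_like(thetas, 1.0 / len(thetas))
h = lambda x, theta: np.sum(x) + theta
X = np.random.default_rng(3).standard_normal((50, 4))
y = X.sum(axis=1) + np.random.default_rng(4).standard_normal(50)
print(integrated_empirical_risk(h, X, y, thetas, weights))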
Assume that h belongs to some space H ⊆ F (X ; F (Θ; Y)) and introduce an integrated loss function DISPLAYFORM5 where the local loss v: Θ × Y × Y → R denotes v θ seen as a function of three variables including the hyperparameter and µ is a probability measure on Θ which encodes the importance of the prediction at different hyperparameter values. Without prior information and for compact Θ, one may consider µ to be uniform. The true risk reads then DISPLAYFORM6 Intuitively, minimizing the expectation of the integral over θ in a rich enough space corresponds to searching for a pointwise minimizer x → h * (x)(θ) of the parametrized tasks introduced in Eq. with, for instance, the implicit space constraint that θ → h * (x)(θ) is a continuous function for each input x. We show in Proposition S.4.1 that this is precisely the case in QR.Interestingly, the empirical counterpart of the true risk minimization can now be considered with a much richer family of penalty terms: DISPLAYFORM7 Here, Ω(h) can be a weighted sum of various penalties as seen in Section 3. Many different models (H) could be applied to solve this problem. In our work we consider Reproducing Kernel Hilbert Spaces as they offer a simple and principled way to define regularizers by the appropriate choice of kernels and exhibit a significant flexibility. This section is dedicated to solving the ITL problem defined in Eq.. We first focus on the objective (V), then detail the applied vv-RKHS model family with various penalty examples, followed by representer theorems which give rise to computational tractability. In practice solving Eq. can be rather challenging due to the integral over θ. One might consider different numerical integration techniques to handle this issue. We focus here on Quasi Monte Carlo (QMC) methods as they allow (i) efficient optimization over vv-RKHSs which we will use for modelling H (Proposition 3.1), and (ii) enable us to derive generalization guarantees (Proposition 3.3). Indeed, let DISPLAYFORM0 be the QMC approximation of Eq.. Let w j = m −1 F −1 (θ j), and (θ j) m j=1 be a sequence with values in d such as the Sobol or Halton sequence where µ is assumed to be absolutely continuous w. r. t. the Lebesgue measure and F is the associated cdf. Using this notation and the training samples S = ((x i, y i)) n i=1, the empirical risk takes the form DISPLAYFORM1 and the problem to solve is DISPLAYFORM2 Hypothesis class (H): Recall that H ⊆ F (X ; F (Θ; Y)), in other words h(x) is a Θ → Y function for all x ∈ X. In this work we assume that the Θ → Y mapping can be described by an RKHS H kΘ associated to a k Θ: Θ × Θ → R scalar-valued kernel defined on the hyperparameters. Let k X: X × X → R be a scalar-valued kernel on the input space. The x → (hyperparameter → output) relation, i. e. h: X → H kΘ is then modelled by the Vector-Valued Reproducing Kernel Hilbert Spa- DISPLAYFORM3 where the operator-valued kernel K is defined as K(x, z) = k X (x, z)I, and I = I H k Θ is the identity operator on H kΘ.This so-called decomposable Operator-Valued Kernel has several benefits and gives rise to a function space with a well-known structure. One can consider elements h ∈ H K as mappings from X to H kΘ, and also as functions from (X × Θ) to R. It is indeed known that there is an isometry between H K and H k X ⊗ H kΘ, the RKHS associated to the product kernel k X ⊗ k Θ. 
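A small sketch of the Quasi Monte Carlo discretization described above: a low-discrepancy (Sobol or Halton) sequence u_1, ..., u_m in [0, 1) is mapped through the inverse CDF F^{-1} of mu to give the anchors theta_j, with equal weights w_j = 1/m (the equal-weight convention is assumed here). scipy's qmc module is used, and m = 64 is an arbitrary choice.

import numpy as np
from scipy.stats import qmc

def qmc_anchors(m, ppf=None, seed=0):
    # Sobol points in [0, 1), pushed through the inverse CDF of mu on Theta.
    u = qmc.Sobol(d=1, scramble=True, seed=seed).random(m).ravel()
    thetas = u if ppf is None else ppf(u)
    weights = np.full(m, 1.0 / m)
    return thetas, weights

# For mu uniform on (0, 1) the inverse CDF is the identity, so the anchors are
# the Sobol points themselves; these (theta_j, w_j) pairs then replace the
# integral over Theta in the discretized objective above.
thetas, weights = qmc_anchors(64)
print(thetas[:5], weights[0])

A non-uniform mu would be handled by passing its ppf (for instance that of a Beta distribution) instead of the identity.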
The equivalence between these views allows a great flexibility and enables one to follow a functional point of view (to analyse statistical aspects) or to leverage the tensor product point of view (to design new kind of penalization schemes). Below we detail various regularizers before focusing on the representer theorems.• Ridge Penalty: For QR, a natural regularization is the squared vv-RKHS norm DISPLAYFORM4 This choice is amenable to excess risk analysis (see Proposition 3.3). It can be also seen as the counterpart of the classical (multi-task regularization term introduced by Sangnier et al. FORMULA0, compatible with an infinite number of tasks. · 2 H K acts by constraining the solution to a ball of a finite radius within the vv-RKHS, whose shape is controlled by both k X and k Θ .• L 2,1 -penalty: For DLSE, it is more adequate to apply an L 2,1 -RKHS mixed regularizer: DISPLAYFORM5 which is an example of a Θ-integrated penalty. This Ω choice allows the preservation of the θ-property (see Fig. S. 3), i. e. that the proportion of the outliers is θ.• Shape Constraints: Taking the example of QR it is advantageous to ensure the monotonicity of the estimated quantile function Let ∂ Θ h denotes the derivative of h(x)(θ) with respect to θ. Then one should solve arg min DISPLAYFORM6 However, the functional constraint prevents a tractable optimization scheme. To mitigate this bottleneck, we penalize if the derivative of h w. r. t. θ is negative: DISPLAYFORM7 When P:=P X this penalization can rely on the same anchors and weights as the ones used to approximate the integrated loss function: DISPLAYFORM8 Thus, one can modify the overall regularizer in QR to be DISPLAYFORM9 Representer Theorems: Apart from the flexibility of regularizer design, the other advantage of applying vv-RKHS as hypothesis class is that it gives rise to finite-dimensional representation of the ITL solution under mild conditions. Proposition 3.1 (Representer). Assume that for ∀θ ∈ Θ, v θ is a proper lower semicontinuous convex function with respect to its second argument. Then DISPLAYFORM10 with Ω(h) defined as in Eq. FORMULA0, has a unique solution h *, and DISPLAYFORM11 For DLSE, we similarly get a representer theorem with the following modelling choice. Let DISPLAYFORM12 2 Then, learning a continuum of level sets boils down to the minimization problem arg min DISPLAYFORM13 where DISPLAYFORM14 Remarks:• Relation to Joint Quantile Regression (JQR): In Infinite Quantile Regression (∞-QR), by choosing k Θ to be the (JQR) framework as a special case of our approach. In contrast to the JQR, however, in ∞-QR one can predict the quantile value at any θ ∈, even outside the (θ j) m j=1 used for learning. DISPLAYFORM15 • Relation to q-OCSVM: In DLSE, by choosing k Θ (θ, θ) = 1 (for all θ, θ ∈ Θ) to be the constant kernel, DISPLAYFORM16 δ θj, our approach specializes to q-OCSVM .• Relation to: Note that Operator-Valued Kernels for functional outputs have also been used in , under the form of integral operators acting on L 2 spaces. Both kernels give rise to the same space of functions; the benefit of our approach being to provide an exact finite representation of the solution (see Proposition 3.1).• Efficiency of the decomposable kernel: this kernel choice allows to rewrite the expansions in Propositions 3.1 and 3.2 as a Kronecker products and the complexity of the prediction of n points for m quantile becomes DISPLAYFORM17 Excess Risk Bounds: Below we provide a generalization error analysis to the solution of Eq. 
FORMULA10 for QR (with Ridge regularization and without shape constraints) by stability argument BID4, extending the work of BID2 to Infinite-Task Learning. The proposition (finite sample bounds are given in Corollary S.5.6) instantiates the guarantee for the QMC scheme. Proposition 3.3 (Generalization). Let h * ∈ H K be the solution of Eq. for the QR problem with QMC approximation. Under mild conditions on the kernels k X, k Θ and P X,Y, stated in the supplement, one has DISPLAYFORM18 (n, m) Trade-off: The proposition reveals the interplay between the two approximations, n (the number of training samples) and m (the number of locations taken in the integral approximation), and allows to identify the regime in λ = λ(n, m) driving the excess risk to zero. Indeed by choosing m = √ n and discarding logarithmic factors for simplicity, λ n −1 is sufficient. The mild assumptions imposed are: boundedness on both kernels and the random variable Y, as well as some smoothness of the kernels. Numerical Experiments: The efficiency of the ITL scheme for QR has been tested on several benchmarks; the are summarized in Table S.1 for 20 real datasets from the UCI repository. An additional experiment concerning the non-crossing property on a synthetic dataset can be found in Fig. S Let us recall the expression of the pinball loss: DISPLAYFORM19 Proposition S.4.1. Let X, Y be two random variables (r. v.s) respectively taking values in X and R, and q: X → F(, R) the associated conditional quantile function. Let µ be a positive measure on such that DISPLAYFORM20 where R is the risk defined in Eq..Proof. The proof is based on the one given in for a single quantile. Let f ∈ F (X ; F (; R)), θ ∈ and (x, y) ∈ X × R. Let also DISPLAYFORM21 Then, notice that DISPLAYFORM0 and since q is the true quantile function, DISPLAYFORM1 Moreover, (t − s) is negative when q(x)(θ) ≤ y ≤ h(x)(θ), positive when h(x)(θ) ≤ y ≤ q(x)(θ) and 0 otherwise, thus the quantity (t − s)(y − h(x)(θ)) is always positive. As a consequence, DISPLAYFORM2 There are several ways to solve the non-smooth optimization problems associated to the QR, DLSE and CSC tasks. One could proceed for example by duality-as it was done in -, or apply sub-gradient descent techniques (which often converge quite slowly). In order to allow unified treatment and efficient solution in our experiments we used the L-BFGS-B optimization scheme which is widely popular in large-scale learning, with non-smooth extensions (; Keskar & Wächter, 2017). The technique requires only evaluation of objective function along with its gradient, which can be computed automatically using reverse mode automatic differentiation (as in BID0). To benefit from from the available fast smooth implementations , we applied an infimal convolution on the non-differentiable terms of the objective. Under the assumption that m = O(√ n) (see Proposition 3.3), the complexity per L-BFGS-B iteration is O(n 2 √ n).The efficiency of the non-crossing penalty is illustrated in Fig. S.2 on a synthetic sine wave dataset where n = 40 and m = 20 points have been generated. Many crossings are visible on the right plot, while they are almost not noticible on the left plot, using the non-crossing penalty. Concerning our real-world examples (20 UCI datasets), to study the efficiency of the proposed scheme in quantile regression the following experimental protocol was applied. Each dataset was splitted randomly into a training set (70%) and a test set (30%). 
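As an aside on the optimization scheme above: the non-differentiable pinball terms are smoothed by infimal convolution before L-BFGS-B is applied, but the exact smoothing is not spelled out here. The sketch below therefore shows one standard choice, the Moreau envelope of the pinball loss with parameter kappa, given as an assumption rather than the authors' precise formula.

import numpy as np

def smoothed_pinball(theta, y, pred, kappa=1e-2):
    # Infimal convolution of the pinball loss with a quadratic of curvature 1/kappa:
    # quadratic inside a small band around the kink, the usual linear branches outside.
    r = y - pred
    upper  = theta * r - 0.5 * kappa * theta ** 2                   # r >= kappa * theta
    lower  = (theta - 1.0) * r - 0.5 * kappa * (1.0 - theta) ** 2   # r <= kappa * (theta - 1)
    middle = r ** 2 / (2.0 * kappa)
    return np.where(r >= kappa * theta, upper,
                    np.where(r <= kappa * (theta - 1.0), lower, middle))

# Sanity check against the exact pinball loss: the gap is O(kappa).
r = np.linspace(-1, 1, 5)
theta = 0.3
exact = np.maximum(theta * r, (theta - 1.0) * r)
print(np.max(np.abs(exact - smoothed_pinball(theta, r, 0.0))))

The smoothed loss is continuously differentiable at the kink while staying within O(kappa) of the exact pinball loss, which is what a quasi-Newton solver such as L-BFGS-B needs.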
We optimized the hyperparameters by minimizing a 5-folds cross validation with a Bayesian optimizer 3. Once the hyperparameters were obtained, a new regressor was learned on the whole training set using the optimized hyperparameters. We report the value of the pinball loss and the crossing loss on the test set for three methods: our technique is called ∞-QR, we refer to's approach as JQR, and independent learning (abbreviated as IND-QR) represents a further baseline. We repeated 20 simulations (different random training-test splits); the are also compared using a Mann-WhitneyWilcoxon test. A summary is provided in Table S.1.Notice that while JQR is tailored to predict finite many quantiles, our ∞-QR method estimates the whole quantile function hence solves a more challenging task. Despite the more difficult problem solved, as Table S.1 suggest that the performance in terms of pinball loss of ∞-QR is comparable to that of the state-of-the-art JQR on all the twenty studied benchmarks, except for the'crabs' and'cpus' datasets (p.-val. < 0.25%). In addition, when considering the non-crossing penalty one can observe that ∞-QR outperforms the IND-QR baseline on eleven datasets (p.-val. < 0.25%) and JQR on two datasets. This illustrates the efficiency of the constraint based on the continuum scheme. The analysis of the generalization error will be performed using the notion of uniform stability introduced in BID4. For a derivation of generalization bounds in vv-RKHS, we refer to . In their framework, the goal is to minimize a risk which can be expressed as DISPLAYFORM0 where S = ((x 1, y 1),..., (x n, y n)) are i. i. d. inputs and λ > 0. We almost recover their setting by using losses defined as DISPLAYFORM1 where V is a loss associated to some local cost defined in Eq.. Then, they study the stability of the algorithm which, given a dataset S, returns DISPLAYFORM2 There is a slight difference between their setting and ours, since they use losses defined for some y in the output space of the vv-RKHS, but this difference has no impact on the validity of the proofs in our case. The use of their theorem requires some assumption that are listed below. We recall the shape of the OVK we use: DISPLAYFORM3, where k X and k Θ are both bounded scalar-valued kernels, in other words there exist (κ X, κ Θ) ∈ R 2 such that sup DISPLAYFORM4 Remark 1. Assumptions 1, 2 are satisfied for our choice of kernel. Assumption 3. The application (y, h, x) → (y, h, x) is σ-admissible, i. e. convex with respect to f and Lipschitz continuous with respect to f (x), with σ as its Lipschitz constant. Assumption 4. ∃ξ ≥ 0 such that ∀(x, y) ∈ X × Y and ∀S training set, (y, h * S, x) ≤ ξ. Definition S.5.1. Let S = ((x i, y i)) n i=1 be the training data. We call S i the training data S i = ((x 1, y 1 DISPLAYFORM5 Definition S.5.2. A learning algorithm mapping a dataset S to a function h * S is said to be β-uniformly stable with respect to the loss function if ∀n ≥ 1, ∀1 ≤ i ≤ n, ∀S training set, DISPLAYFORM6 Proposition S.5.1. BID4 Let S → h * S be a learning algorithm with uniform stability β with respect to a loss satisfying Assumption 4. Then ∀n ≥ 1, ∀δ ∈, with probability at least 1 − δ on the drawing of the samples, it holds that DISPLAYFORM7 Proposition S. Quantile Regression: We recall that in this setting, v(θ, y, h(x)(θ)) = max (θ(y − h(x)(θ)), (1 − θ)(y − h(x)(θ))) and the loss is DISPLAYFORM8 Moreover, we will assume that |Y | is bounded by B ∈ R as a r. v.. 
We will therefore verify the hypothesis for y ∈ [−B, B] and not y ∈ R.Lemma S.5.3. In the case of the QR, the loss is σ-admissible with σ = 2κ Θ.Proof. Let h 1, h 2 ∈ H K and θ ∈. ∀x, y ∈ X × R, it holds that DISPLAYFORM9 where s = 1 y≤h1(x)(θ) and t = 1 y≤h2(x)(θ). We consider all possible cases for t and s: DISPLAYFORM10 | because of the conditions on t, s. DISPLAYFORM11 By summing this expression over the (θ j) m j=1, we get that DISPLAYFORM12 and is σ-admissible with σ = 2κ Θ.Lemma S.5.4. Let S = ((x 1, y 1),..., (x n, y n)) be a training set and λ > 0. Then ∀x, θ ∈ X ×, it holds that DISPLAYFORM13 Proof. Since h * S is the output of our algorithm and 0 ∈ H K, it holds that DISPLAYFORM14 Lemma S.5.5. Assumption 4 is satisfied for ξ = 2 B + κ X κ Θ B λ.Proof. Let S = ((x 1, y 1),..., (x n, y n)) be a training set and h * S be the output of our algorithm. DISPLAYFORM15 Corollary S.5.6. The QR learning algorithm defined in Eq. is such that ∀n ≥ 1, ∀δ ∈, with probability at least 1 − δ on the drawing of the samples, it holds that DISPLAYFORM16 Proof. This is a direct consequence of Proposition S.5.2, Proposition S.5.1, Lemma S.5.3 and Lemma S.5.5.Definition S.5.3 (Hardy-Krause variation). Let Π be the set of subdivisions of the interval Θ =. A subdivision will be denoted σ = (θ 1, θ 2, . . ., θ p) and f: Θ → R be a function. We call Hardy-Krause variation of the function f the quantity sup DISPLAYFORM17 Remark 2. If f is continuous, V (f) is also the limit as the mesh of σ goes to zero of the above quantity. In the following, let f: DISPLAYFORM18 This function is of primary importance for our analysis, since in the Quasi Monte-Carlo setting, the bound of Proposition 3.3 makes sense only if the function f has finite Hardy-Krause variation, which is the focus of the following lemma. Lemma S.5.7. Assume the boundeness of both scalar kernels k X and k Θ. Assume moreover that k Θ is C 1 and that its partial derivatives are uniformly bounded by some constant C. Then DISPLAYFORM19 Proof. It holds that DISPLAYFORM20 The supremum of the integral is smaller than the integral of the supremum, as such DISPLAYFORM21 where f x,y: θ → v(θ, y, h * S (x)(θ)) is the counterpart of the function f at point (x, y). To bound this quantity, let us first bound locally V (f x,y). To that extent, we fix some (x, y) in the following. Since f x,y is continuous (because k Θ is C 1), then using Choquet (1969, Theorem 24.6), it holds that DISPLAYFORM22 Moreover since k ∈ C 1 and ∂k θ = (∂ 1 k)(·, θ) has a finite number of zeros for all θ ∈ ×, one can assume that in the subdivision considered afterhand all the zeros (in θ) of the residuals y − h * S (x)(θ) are present, so that y − h * S (x)(θ i+1) and y − h * S (x)(θ i) are always of the same sign. Indeed, if not, create a new, finer subdivision with this property and work with this one. Let us begin the proper calculation: let σ = (θ 1, θ 2, . . ., θ p) be a subdivision of Θ, it holds that ∀i ∈ {1, . . 
., p − 1}: DISPLAYFORM23 We now study the two possible outcomes for the residuals: DISPLAYFORM24 Since k Θ is C 1, with partial derivatives uniformly bounded by C, |k Θ (θ i+1, θ i+1) − k Θ (θ i+1, θ i)| ≤ C(θ i+1 − θ i) and |k Θ (θ i, θ i) − k Θ (θ i+1, θ i)| ≤ C(θ i+1 − θ i) so that |h * S (x)(θ i) − h * S (x)(θ i+1)| ≤ κ X 2BC λ θ i+1 − θ i and overall DISPLAYFORM25 • If y − h(x)(θ i+1) ≤ 0 and y − h(x)(θ i) ≤ 0 then |f x,y (θ i+1) − f x,y (θ i)| = |(1 − θ i+1)(y − h * S (x)(θ i+1)) − (1 − θ i)(y − h * S (x)(θ i))| ≤ |h * S (x)(θ i) − h * S (x)(θ i+1)| + |(θ i+1 − θ i)y| + |(θ i − θ i+1)h * S (x)(θ i+1)| + |θ i (h * S (x)(θ i) − h * S (x)(θ i+1))| so that with similar arguments one gets DISPLAYFORM26 Therefore, regardless of the sign of the residuals y − h(x)(θ i+1) and y − h(x)(θ i), one gets Eq.. Since the square root function has Hardy-Kraus variation of 1 on the interval Θ =, it holds that DISPLAYFORM27 Combining this with Eq. finally gives DISPLAYFORM28 Lemma S.5.8. Let R be the risk defined in Eq. for the quantile regression problem. Assume that (θ) m j=1 have been generated via the Sobol sequence and that k Θ is C 1 and that its partial derivatives are uniformly bounded by some constant C. Then Proof of Proposition 3.3. Combine Lemma S.5.8 and Corollary S.5.6 to get an asymptotic behaviour as n, m → ∞. To assess the quality of the estimated model by ∞-OCSVM, we illustrate the θ-property (Schölkopf et al., 2000): the proportion of inliers has to be approximately 1 − θ (∀θ ∈). For the studied datasets (Wilt, Spambase) we used the raw inputs without applying any preprocessing. Our input kernel was the exponentiated χ 2 kernel k X (x, z):= exp −γ X d k=1 (x k − z k) 2 /(x k + z k) with bandwidth γ X = 0.25. A Gauss-Legendre quadrature rule provided the integral approximation in Eq. FORMULA8, with m = 100 samples. We chose the Gaussian kernel for k Θ; its bandwidth parameter γ Θ was the 0.2−quantile of the pairwise Euclidean distances between the θ j's obtained via the quadrature rule. The margin (bias) kernel was k b = k Θ. As it can be seen in FIG6, the θ-property holds for the estimate which illustrates the efficiency of the proposed continuum approach for density level-set estimation. | We propose an extension of multi-output learning to a continuum of tasks using operator-valued kernels. | 655 | scitldr |
We analyze the joint probability distribution on the lengths of the vectors of hidden variables in different layers of a fully connected deep network, when the weights and biases are chosen randomly according to Gaussian distributions, and the input is binary-valued. We show that, if the activation function satisfies a minimal set of assumptions, satisfied by all activation functions that we know that are used in practice, then, as the width of the network gets large, the ``length process'' converges in probability to a length map that is determined as a simple function of the variances of the random weights and biases, and the activation function. We also show that this convergence may fail for activation functions that violate our assumptions. second layer may not be independent, even for some permissible φ like the ReLU. The of this section contradict claims made in BID13 BID9. Section 5 describes some simulation experiments verifying some of the findings of the paper, and illustrating the dependence among the values of the hidden nodes. Our analysis of the convergence of the length map borrows ideas from Daniely, et al. BID1, who studied the properties of the mapping from inputs to hidden representations ing from random Gaussian initialization. Their theory applies in the case of activation functions with certain smoothness properties, and to a wide variety of architectures. Our analysis treats a wider variety of values of σ w and σ b, and uses weaker assumptions on φ. For n ∈ N, we use [n] to denote the set {1, 2, . . ., n}. If T is a n × m × p tensor, then, for i ∈ [n], let T i,:,: = T i,j,k jk, and define T i,j,:, etc., analogously. Consider a deep fully connected width-N network with D layers. Let W ∈ R D×N ×N. An activation function φ maps R to R; we will also use φ to denote the function from R N to R N obtained by applying φ componentwise. Computation of the neural activity vectors x 0,:,..., x D,: ∈ R N and preactivations h 1,:,..., h D,: ∈ R N proceeds in the standard way as follows:h,: = W,:,: x −1,: + b,: x,: = φ(h,:), for = 1,..., D.We will study the process arising from fixing an arbitrary input x 0,: ∈ {−1, 1} N and choosing the parameters independently at random: the entries of W are sampled from Gauss 0, DISPLAYFORM0 Note that for all ≥ 1, all the components of h,: and x,: are identically distributed. For the purpose of defining a limit, assume that, for a fixed, arbitrary function χ: N → {−1, 1}, for finite N, we have x 0,: = (χ,..., χ(N)). For > 0, if the limit exists (in the sense of "convergence in distribution"), let x be a random variable whose distribution is the limit of the distribution of x,1 as N goes to infinity. Define h and q similarly. If P and Q are probability distributions, then d T V (P, Q) = sup E P (E) − Q(E), and if p and q are their densities, DISPLAYFORM0 In this section we characterize the length map of the hidden nodes of a deep network, for all activation functions satisfying the following assumptions. Definition 1 An activation function φ is permissible if, (a) the restriction of φ to any finite interval is bounded; (b) |φ(x)| = exp(o(x 2)) as |x| gets large. 1; and (c) φ is measurable. Conditions (b) and (c) ensure that a key integral can be computed. DISPLAYFORM0 If φ is permissible, then, since φ(cz) 2 exp(−z 2 /2) is integrable for all c, we have that q 0,...,q D,r 0,...,r D are well-defined finite real numbers. 
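The displayed recursions above were lost in extraction; the following small simulation is a sketch that assumes the standard width-independent length map q̃_ℓ = σ_b² + σ_w² E_{Z∼N(0,1)}[φ(√q̃_{ℓ-1} Z)²] with q̃_0 = 1 and r̃_0 = 1 (binary ±1 inputs), and compares it to the empirical per-layer squared lengths q_ℓ = (1/N)‖h_ℓ‖² of a single wide random network.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 2000, 6                 # width and depth
sw2, sb2 = 2.0, 0.1            # sigma_w^2, sigma_b^2
phi = np.tanh                  # any permissible activation

# analytic length map, integral approximated by Monte Carlo over a standard normal
z = rng.normal(size=200_000)
q_map = [1.0, sb2 + sw2 * 1.0]                      # layer 1 uses r_0 = 1 (inputs are +-1)
for _ in range(D - 1):
    q_prev = q_map[-1]
    q_map.append(sb2 + sw2 * np.mean(phi(np.sqrt(q_prev) * z) ** 2))

# empirical lengths for one random network on a {-1,+1}-valued input
x = rng.choice([-1.0, 1.0], size=N)
q_emp = [np.mean(x ** 2)]
for _ in range(D):
    W = rng.normal(scale=np.sqrt(sw2 / N), size=(N, N))
    b = rng.normal(scale=np.sqrt(sb2), size=N)
    h = W @ x + b
    q_emp.append(np.mean(h ** 2))
    x = phi(h)

print(np.round(q_map, 3))
print(np.round(q_emp, 3))      # should agree closely for large N
```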
The following theorem shows that the length map q 0,..., q D converges in probability toq 0,...,q D.Theorem 2 For any permissible φ, σ w, σ b ≥ 0, any depth D, and any, δ > 0, there is an N 0 such that, for all N ≥ N 0, with probability 1 − δ, for all ∈ {0, ..., D}, we have |q −q | ≤.The rest of this section is devoted to proving Theorem 2. Our proof will use the weak law of large numbers. For any random variable X with a finite expectation, and any, δ > 0, there is an N 0 such that, for all N ≥ N 0, if X 1,..., X N are i.i.d. with the same distribution as X, then DISPLAYFORM0 In order to divide our analysis into cases, we need the following lemma, whose proof is in Appendix B.Lemma 4 If φ is permissible and not zero a.e., for all σ w > 0, for all ∈ {0, ..., D},q > 0 and r > 0.We will also need a lemma that shows that small changes in σ lead to small changes in Gauss(0, σ 2).Lemma 5 (see BID7) There is an absolute constant C such that, for all σ 1, σ 2 > 0, DISPLAYFORM1 The following technical lemma is proved in Appendix C. If φ is permissible, for all 0 < r ≤ s, for all β > 0, there is an a ≥ 0 such that, for all q ∈ [r, s], DISPLAYFORM0 Armed with these lemmas, we are ready to prove Theorem 2.First, if φ is zero a.e., or if σ w = 0, Theorem 2 follows directly from Lemma 3, together with a union bound over the layers. Assume for the rest of the proof that φ(x) is not zero a.e., and that σ w > 0, so thatq > 0 andr > 0 for all. DISPLAYFORM1,i. Our proof of Theorem 2 is by induction. The inductive hypothesis is that, for any, δ > 0 there is an N 0 such that, if N ≥ N 0, then, with probability 1 − δ, for all ≤, |q −q | ≤ and |r −r | ≤.The base case holds because q 0 =q 0 = r 0 =r 0 = 1, no matter what the value of N is. Now for the induction step; choose > 0, 0 < < min{q /4,r} and 0 < δ ≤ 1/2. (Note that these choices are without loss of generality.) Let ∈ (0,) take a value that will be described later, using quantities from the analysis. By the inductive hypothesis, whatever the value of, there is an N 0 such that, if N ≥ N 0, then, with probability 1 − δ/2, for all ≤ − 1, we have |q −q | ≤ and |r −r | ≤. Thus, to establish the inductive step, it suffices to show that, after conditioning on the random choices before the th layer, if |q −1 −q −1 | ≤, and |r −1 −r −1 | ≤, there is an N such that, if N ≥ N, then with probability at least 1 − δ/2 with respect only to the random choices of W,:,: and b,:, that |q −q | ≤ and |r −r | ≤. Given such an N, the inductive step can be satisfied by letting N 0 be the maximum of N 0 and N.Let us do that. For the rest of the proof of the inductive step, let us condition on outcomes of the layers before layer, and reason about the randomness only in the th layer. Let us further assume that |q −1 −q −1 | ≤ and |r −1 −r −1 | ≤. DISPLAYFORM2 Since we have conditioned on the values of h −1,1,..., h −1,N, each component of h,i is obtained by taking the dot-product of x −1,: = φ(h −1,:) with W,i,: and adding an independent b,i. Thus, conditioned on h −1,1,..., h −1,N, we have that h,1,..., h,N are independent. Also, since x −1,: is fixed by conditioning, each h,i has an identical Gaussian distribution. Since each component of W and b has zero mean, each h,i has zero mean. Choose an arbitrary i ∈ [N]. 
Since x −1,: is fixed by conditioning and W,i,1,..., W,i,N and b,i are independent, DISPLAYFORM3 We wish to emphasize the q is determined as a function of random outcomes before the th layer, and thus a fixed, nonrandom quantity, regarding the randomization of the th layer. By the inductive hypothesis, we have DISPLAYFORM4 The key consequence of this might be paraphrased by saying that, to establish the portion of the inductive step regarding q, it suffices for q to be close to its mean. Now, we want to prove something similar for r. We have DISPLAYFORM5 which gives DISPLAYFORM6 Since |q −q | ≤ σ 2 w and we may choose to ensure ≤q 2σ 2 w, we haveq /2 ≤ q ≤ 2q.For β > 0 and κ ∈ (0, 1/2) to be named later, by Lemma 6, we can choose a such that, for all q ∈ [q /2, 2q], DISPLAYFORM7 We claim that DISPLAYFORM8 So now we are trying to bound DISPLAYFORM9 Using changes of variables, we have DISPLAYFORM10 But since, for κ < 1/2, conditioning on an event of probability at least 1 − κ only changes a distribution by total variation distance at most 2κ, and therefore, applying Lemma 5 along with the fact that |q −q | ≤ σ 2 w, for the constant C from Lemma 5, we get DISPLAYFORM11 Tracing back, we have DISPLAYFORM12 If κ = min{24M, Recall that q is an average of N identically distributed random variables with a mean between 0 and 2q (which is therefore finite) and r is an average of N identically distributed random variables, each with mean between 0 andr + /2 ≤ 2r. Applying the weak law of large numbers (Lemma 3), there is an N such that, if N ≥ N, with probability at least 1 − δ/2, both |q − E[q]| ≤ /2 and |r − E[r]| ≤ /2 hold, which in turn implies |q −q | ≤ and |r −r | ≤, completing the proof of the inductive step, and therefore the proof of Theorem 2. In this section, we show that, for some activation functions, the probability distribution of hidden nodes can have some surprising properties. In this subsection, we will show that the hidden variables are sometimes not Gaussian. Our proof will refer to the Cauchy distribution. Definition 2 A distribution over the reals that, for x 0 ∈ R and γ > 0, has a density f given by FIG1 DISPLAYFORM0 DISPLAYFORM1 So, for all N, h 2,1 is Cauchy(0, √ N). Suppose that h 2,1 converged in distribution to some distribution P. Since the cdf of P can have at most countably many discontinuities, we can cover the real line by a countable set of finite-length FIG1,... whose endpoints are points of continuity for P. Since Cauchy(0, DISPLAYFORM2 Thus, the probability assigned by P to the entire real line is 0, a contradiction. The following contradicts a claim made on line 8 of Section A.1 of BID13 .Theorem 10 If φ is either the ReLU or the Heaviside function, then, for every σ w > 0, σ b ≥ 0, and N ≥ 2, FIG1 are not independent. Proof: We will show that E[h DISPLAYFORM0, which will imply that h 2,1 and h 2,2 are not independent. As mentioned earlier, because each component of h 1,: is the dot product of x 0,: with an independent row of W 1,:,: plus an independent component of b 1,:, the components of h 1,: are independent, and since x 1,: = φ(h 1,:), this implies that the components of x 1,: are independent. Since each row of W 1,:,: and each component of the bias vector has the same distribution, x 1,: is i.i.d. We have DISPLAYFORM1 The components of W 2,:,: and x 1,:, along with b 2,1, are mutually independent, so terms in the double sum with i = j have zero expectation, and E[h DISPLAYFORM2 . 
For a random variable x with the same distribution as the components of x 1,:, this implies DISPLAYFORM3 Similarly, DISPLAYFORM4 Putting this together with, we have ) and Gauss(0, σ 2) for σ estimated from the data (shown in red). Now, we calculate the difference using for the Heaviside and ReLU functions. DISPLAYFORM5 Heaviside. Suppose φ is Heaviside function, i.e. φ(z) is the indicator function for z > 0. In this case, since the components of h 1,: are symmetric about 0, the distribution of x 1,: is uniform over DISPLAYFORM6 DISPLAYFORM7 dz. By symmetry this is DISPLAYFORM8 Similarly, E[DISPLAYFORM9 2 . Plugging these into we get that, in the case the φ is the ReLU, that DISPLAYFORM10 completing the proof. Here, we show, informally, that for φ at the boundary of the second condition in the definition of permissibility, the recursive formula defining the length mapq breaks down. Roughly, this condition cannot be relaxed. For any α > 0, if φ is defined by φ(x) = exp(αx 2), there exists a σ w, σ b s.t.q,r is undefined for all ≥ 2. For each N ∈ {10, 100, 1000}, we (a) initialized the weights 100 times, (b) plotted the histograms of all of the values of h [2, :], along with the Cauchy(0, √ N) distribution from the proof of Proposition 9, and Gauss(0, σ 2) for σ estimated from the data. Consistent with the theory, the Cauchy(0, √ N) distribution fits the data well. To illustrate the fact that the values in the second hidden layer are not independent, for N = 1000 and the parameters otherwise as in the other experiment, we plotted histograms of the values seen in the second layer for nine random initializations of the weights in FIG5. When some of the values in the first hidden layer have unusually small magnitude, then the values in the second hidden layer coordinately tend to be large. This is in contrast with the claim made at the end of Section 2.2 of BID9. Note that this is consistent with Theorem 2 establishing convergence in probability for permissible φ, since the φ used in this experiment is not permissible. | We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map. | 656 | scitldr |
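An illustrative simulation (a sketch, not the authors' code) of the dependence asserted in Theorem 10: over many random initializations of a two-layer ReLU network, the empirical value of E[h²_{2,1} h²_{2,2}] − E[h²_{2,1}]E[h²_{2,2}] is clearly positive, so the two second-layer preactivations cannot be independent. The width, variances, and trial count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 25, 20_000
sw2, sb2 = 2.0, 0.1
relu = lambda t: np.maximum(t, 0.0)

x0 = rng.choice([-1.0, 1.0], size=N)
h2_samples = np.empty((trials, 2))
for t in range(trials):
    W1 = rng.normal(scale=np.sqrt(sw2 / N), size=(N, N))
    b1 = rng.normal(scale=np.sqrt(sb2), size=N)
    x1 = relu(W1 @ x0 + b1)
    W2 = rng.normal(scale=np.sqrt(sw2 / N), size=(2, N))   # only two output units needed
    b2 = rng.normal(scale=np.sqrt(sb2), size=2)
    h2_samples[t] = W2 @ x1 + b2

sq = h2_samples ** 2
cov = np.mean(sq[:, 0] * sq[:, 1]) - sq[:, 0].mean() * sq[:, 1].mean()
print("E[h1^2 h2^2] - E[h1^2] E[h2^2] =", cov)   # clearly positive up to Monte Carlo noise
```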
Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning. However, most current data augmentation implementations applied in meta-learning are the same as those used in the conventional image classification. In this paper, we introduce a new data augmentation method for meta-learning, which is named as ``Task Level Data Augmentation'' (referred to Task Aug). The basic idea of Task Aug is to increase the number of image classes rather than the number of images in each class. In contrast, with a larger amount of classes, we can sample more diverse task instances during training. This allows us to train a deep network by meta-learning methods with little over-fitting. Experimental show that our approach achieves state-of-the-art performance on miniImageNet, CIFAR-FS, and FC100 few-shot learning benchmarks. Once paper is accepted, we will provide the link to code. Although the machine learning systems have achieved a human-level ability in many fields with a large amount of data, learning from a few examples is still a challenge for modern machine learning techniques. Recently, the machine learning community has paid significant attention to this problem, where few-shot learning is the common task for meta-learning (e.g., ; ; ;). The purpose of few-shot learning is to learn to maximize generalization accuracy across different tasks with few training examples. In a classification application of the few-shot learning, tasks are generated by sampling from a conventional classification dataset; then, training samples are randomly selected from several classes in the classification dataset. In addition, a part of the examples is used as training examples and testing examples. Thus, a tiny learning task is formed by these examples. The meta-learning methods are applied to control the learning process of a base learner, so as to correctly classify on testing examples. Data augmentation is widely used to improve the training of deep learning models. Usually, the data augmentation is regarded as an explicit form of regularization;;. Thus, the data augmentation aims at artificially generating the training data by using various translations on existing data, such as: adding noises, cropping, flipping, rotation, translation, etc. The general idea of data augmentations is increasing the number of data by change data slightly to be different from original data, but the data still can be recognized by human. The new data involved in the classes are identical to the original data. However, the minimum units of meta-learning are the tasks rather than data. Increasing the data of original class cannot increase the types of task instances. Therefore, "Task Aug" increases the data that can be clearly recognized as the different classes as the original data. With novel classes, the more diverse task instances can be generated. This is important for the meta-learning, since metalearning models must predict unseen classes during the testing phase. Therefore, a larger number of classes is helpful for models to generate task instances with different classes. In this work, the natural images are augmented by being rotated 90, 180, 270 degrees (we show examples in Figure 1). We compare two cases, 1) the new images are converted to the classes of original images and 2) the new images are separated to the new classes. 
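A minimal sketch of the two variants compared here (a hypothetical array-based implementation): every image is rotated by 90, 180, and 270 degrees, and the rotated copies either keep the original label (conventional Data Aug) or receive their own new class id (Task Aug), which multiplies the number of classes by four.

```python
import numpy as np

def rotate_augment(images, labels, num_classes, as_new_classes=True):
    """images: (n, H, W[, C]) array; labels: (n,) ints in [0, num_classes).
    Returns the original data plus 90/180/270-degree rotations.
    If as_new_classes, rotation r of class c gets label c + r * num_classes
    (Task Aug); otherwise the rotated copies keep their original label (Data Aug)."""
    out_x, out_y = [images], [labels]
    for r in (1, 2, 3):                                   # 90, 180, 270 degrees
        out_x.append(np.rot90(images, k=r, axes=(1, 2)))
        out_y.append(labels + r * num_classes if as_new_classes else labels)
    return np.concatenate(out_x), np.concatenate(out_y)

# toy usage: 100 fake 28x28 images over 5 classes
x = np.random.rand(100, 28, 28)
y = np.random.randint(0, 5, size=100)
x_aug, y_aug = rotate_augment(x, y, num_classes=5, as_new_classes=True)
print(x_aug.shape, y_aug.max() + 1)   # (400, 28, 28), 20 classes
```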
The proposed method is evaluated by experiments with the state of art meta-learning . The experimental analysis shows that Task Aug can reduce over-fitting and improve the performance, while the conventional data augmentation (referred to Data Aug) of rotation, which converts the novel data into the classes of original data, does not improve the performance and even causes the worse . In the comparative experiments, Task Aug achieves the best accuracy of the meta-learning methods applied. Besides, the best of our experiments exceed the current state-of-art over a large margin. Meta-learning involves two hierarchies learning processes: low-level and high-level. The low-level learning process learns to deal with general tasks, often termed as the "inner loop"; and the highlevel learning process learns to improve the performance of a low-level task, often termed as the "outer loop". Since models are required to handle sensory data like images, deep learning methods are often applied for the "outer loop". However, the machine learning methods applied for the "inner loop" are very diverse. Based on different methods in the "inner loop", meta-learning can be applied in image recognition;;;; , image generation;; , reinforce learning; , and etc. This work focuses on few-shot learning image recognition based on meta-learning. Therefore, in the experiment, the methods applied in the "inner loop" are able to classify data, and they are K-nearest neighbor (KNN), Support Vector Machine (SVM) and ridge regression, respectively;;. Previous studies have introduced many popular regularization techniques to few-shot learning from deep learning, such as weight decay, dropout, label smooth, and data augmentation. Common data augmentation techniques for image recognition are usually designed manually and the best augmentation strategies depend on dataset. However, in natural color image datasets, random cropping and random horizontal flipping are the most common. Since the few-shot learning tasks consist of natural color images, the random horizontal flipping and random cropping are applied in few-shot learning. In addition, color (brightness, contrast, and saturation) jitter is often applied in the works of few-shot learning;. Other data augmentation technologies related to few-shot learning include generating samples by few-shot learning and generating samples for few-shot learning. The former tried to synthesize additional examples via transferring, extracting, and encoding to create the data of the new class, that are intra-class relationships between pairs of reference classes' data instances;. The later tried to apply meta-learning in a few-shot generation to generate samples from other models. In addition to these two types of studies, the data augmentation technology most closed to the new proposed approach is applied to Omniglot dataset, which consists of handwritten words. They created the novel classes by rotating the original images 90, 180 and 270 degrees. However, this approach cannot be applied for the natural color image directly, and we will explain the reasons and the solutions in Section 3. We adopt the formulation purposed by to describe the N -way K-shot task. A few-shot task contains many task instances (denoted by T i), each instance is a classification problem consisting of the data sampled from N classes. The classes are randomly selected from a classes set. 
The classes set are split into M tr, M val and M test for a training class set C tr, a validation classes set C val, and a test classes set C test. In particular, each class cannot overlap others (i.e., the classes used during testing are unseen classes during training). Data is randomly sampled from C tr, C val and C test, so as to create task instances for training meta-set S tr, validation meta-set S val, and test meta-set S test, respectively. The validation and testing meta-sets are used for model selection and final evaluation, respectively. The data in each task instance, T i, are divided into training examples D tr and validation examples D val. Both of them only contains the data from N classes which sampled from the appropriate classes set randomly (for a task instance applied during training, the classes form a subset of the training classes set C tr). In most settings, the training set.. K} consists of K data instances from each class, this processing usually called as a "shot". The validation set, D val, consists of several other data instances from the same classes, this processing is usually called as a "query". An evaluation is provided for generalization performance on the N classification task instance D tr. Note that: the validation set of a task instance D val (for optimizing model during "outer loop") is different from the held-out validation classes set C val and meta-set S val (for model selection). This work is to increase the size of the training classes set, M tr, by rotating all images within the training classes set with 90, 180, 270 degrees. The size, M tr, is increased for three times. In the Omniglot dataset consisting of handwritten words , this approach works well, since it can rotate a handwritten word multiple of 90 degrees and treat the new one as another word; in addition, it is really possible that the novel word is similar to some words, which are not included in the training classes but existed. However, for natural images, it is not the same cause. For examples, the images in the third line of Figure 1 are difficult to identify which images are rotated. Moreover, the images are rarely rotated in the photos taken by humans. Despite the two problems, the fundamental features of the novel images can provide useful information. We assign novel classes that contain smaller weights than the original classes, so as to make models prioritize learning the features of the original classes, and make the features of the novel classes as a supplement to prevent the augmented data from taking up large capacity in the model. The smaller weights are implemented in two ways, 1) lower probability and 2) delay selecting the novel classes. For a class in a task instance, the probability of the class coming from the novel classes is p, and the probability coming from the original classes is 1 − p. Besides, The initial p is set to 0, then linearly rises from 0 to p max after generating T task instances. The max probability p max is set lower than the proportion of the novel classes in all classes to make each novel class have a lower probability than each original class. The whole process of Task Aug on a classes set is summarized in Algorithm 1 and Figure 2. In this work, we also compare the methods with the training protocol with ensemble method in addition to the standard training protocol, which choosing a model by the validation set. 
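The class-sampling step described above (and summarized in Algorithm 1) can be sketched as follows; this is a hypothetical implementation in which p ramps linearly from 0 to p_max over the first T generated task instances, and each of the N ways is drawn from the novel (rotated) classes with probability p and from the original classes otherwise.

```python
import random

def sample_task_classes(original_classes, novel_classes, n_way, t, T, p_max):
    """Pick the N distinct classes for one task instance.
    t: number of task instances generated so far; p ramps 0 -> p_max over T."""
    p = p_max * min(t / T, 1.0)
    chosen = []
    while len(chosen) < n_way:
        pool = novel_classes if random.random() < p else original_classes
        c = random.choice(pool)
        if c not in chosen:                 # classes within a task must be distinct
            chosen.append(c)
    return chosen

# toy usage: 64 original classes, 192 rotated ("novel") classes, 5-way tasks
orig = list(range(64))
novel = list(range(64, 256))
print(sample_task_classes(orig, novel, n_way=5, t=40_000, T=80_000, p_max=0.5))
```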
The training protocol with an ensemble method use the models with different training epoch to Algorithm 1 Task Level Data Augmentation. Require: Classes set C = {c 1, c 2, . . ., c M}; Max possibility for Task Aug p max; The delay to Task Aug T; The current count t; The number of ways, shots and queries N, K, H 1: Rotate all x ∈ {x|(x, y) ∈ D} 90r degrees 17: an ensemble model, in order to better use the models obtained in a single training process, and this approach has been proved to be valid for meta-learning by experiments. We adopt this ensemble method. However, unlike and that we did not use cyclic annealing for learning rate and any methods to select models. We directly took the average of the prediction of all models, which are saved according to an interval of 1 epoch. In Section 4, the methods with this ensemble approach are marked by "+ens". We evaluate the proposed method on few-shot learning tasks. In order to ensure fair, both the of baseline and Task Aug were run in our own environment. The comparative experiment is designed to answer the following questions: Is Task Aug able to improve the performance of meta-learning? How much should the probably for the novel classes be set? Will converting the novel data into the classes of the original data cause worse , which are generated by being rotated 90, 180, 270 degrees?, we used ResNet-12 network in our experiments. The ResNet-12 network had four residual blocks which contains three 3 × 3 convolution, batch normalization and Leaky ReLU with 0.1 negative slope. One 2 × 2 max-pooling layer is used for reducing the size of the feature map. The numbers of the network channels were 64, 160, 320 and 640, respectively. DropBlock regularization is used in the last two residual blocks, the conventional dropout is used in the first two residual blocks. The block sizes of DropBlock were set to 2 and 5 for CIFAR derivatives and ImageNet derivatives, respectively. In all experiments, the dropout possibility was set to 0.1. The global average pooling was not used for the final output of the last residual block. For ProtoNets, we did not use a higher way for training than testing like. Instead, the equal number of shot and way were used in both training and evaluation, and its output multiplied by a learnable scale before the softmax following;, For M-SVM, we set training shot to 5 for CIFAR-FS; 15 for FC100; and 15 for miniImageNet; regularization parameter of SVM was set to 0.1; and a learnable scale was used following. We did not use label smoothing like, because we did not find that label smoothing can improve the performance in our environment. This was also affirmed from the author's message on GitHub, that Program language packages and environment might affect of the meta-learning method. For R2-D2, we set the same training shot as for M-SVM, and used a learnable scale and bias following. It was different from we used a fixed regularization parameter of ridge regression which was set to 50 because has confirmed that making it learnable might not be helpful. Last, for all methods, each class in a task instance contained 6 test (query) examples during training and 15 test (query) examples during testing. Stochastic gradient descent (SGD) was used. , we set weight decay and Nesterov momentum to 0.0005 and 0.9, respectively. Each mini-batch contained 8 task instances. The meta-learning model was trained for 60 epochs, and 1000 mini-batchs for each epoch. 
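The "+ens" protocol described above simply averages the predictions of the per-epoch snapshots; a minimal, framework-agnostic sketch with a hypothetical interface is:

```python
import numpy as np

def ensemble_predict(checkpoint_predict_fns, x):
    """Average class probabilities over the models saved after each epoch.
    checkpoint_predict_fns: list of callables mapping a batch x to (n, C) probabilities."""
    probs = np.mean([f(x) for f in checkpoint_predict_fns], axis=0)
    return probs.argmax(axis=1), probs
```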
We set the initial learning rate to 0.1, then multiplied it by 0.06, 0.012, and 0.0024 at epochs 20, 40 and 50, respectively, as in. The , which are marked by "ens" were used the 60 models saved after each epoch to become an ensemble model. For the final epoch, the training classes set was augmented by the validation classes set. We chose the model at the epoch where we got the best model during training on the training classes set only. The of the final run are marked by "+val" in this subsection. For data augmentation, we adopted random random crop, horizontal flip, and color (brightness, saturation, and contrast) jitter data augmentation following the work of;. We set p max to 0.5 for CIFAR-FS and FC100; 0.25 for miniImageNet; and T was set to 80000 for all experiments. The FC100 are also derived from CIFAR-100 , and the 100 classes are grouped into 20 superclasses. The training, validation, and testing classes contain 60 classes from 12 superclasses, 20 classes from 4 superclasses, and 20 classes from 4 superclasses, respectively. The target is to minimize the information overlap between classes to make it more challenging than current few-shot classification tasks. Same as CIFAR-FS, there are 600 nature color images of size 32 × 32 in each class. Results. In Table 1, we compare our with the previous studies, and the table shows that the highest accuracies of our experiments exceeded the current state-of-art accuracies from 3% to 5%. Besides, Table 2 and Table 3 summarize the on the CIFAR-FS and FC100 5-way tasks, and in most cases our method rises accuracy by 0.5%-3%. Figure 3: The accuracies (%) on meta-test sets with varying probability p max for the novel classes. The 95% confidence interval is denoted by the shaded region. In general, the performance of Task Aug on most of the regimes is better than Data Aug and baseline. To identify whether the rotation multi 90 degrees for Task Aug is better than that for Data Aug, we analyzed the experiment on CIFAR-FS and miniImageNet. The linear rising of p was also used for Data Aug, and T = 80000 for both Task Aug and Data Aug. In the analysis, the training classes set was not augmented by the validation classes set. As shown in Figure 3, we observed that: with p max, the accuracy rises at first, reaches the peaks between 0.25 and 0.5, then declines and reaches baseline when p max = 0.75 at the end, which is the proportion of the novel classes in all classes. On the other hand, the rotation multi 90 degrees for Data Aug can not improve or even cause worse performance. | We propose a data augmentation approach for meta-learning and prove that it is valid. | 657 | scitldr |
In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as "Bayesian Dark Knowledge." Our generalized framework applies to the case of classification models and takes as input the architecture of a "teacher" network, a general posterior expectation of interest, and the architecture of a "student" network. The distillation method performs an online compression of the selected posterior expectation using iteratively generated Monte Carlo samples from the parameter posterior of the teacher model. We further consider the problem of optimizing the student model architecture with respect to an accuracy-speed-storage trade-off. We present experimental results investigating multiple data sets, distillation targets, teacher model architectures, and approaches to searching for student model architectures. We establish the key result that distilling into a student model with an architecture that matches the teacher, as is done in Bayesian Dark Knowledge, can lead to sub-optimal performance. Lastly, we show that student architecture search methods can identify student models with significantly improved performance. Deep learning models have shown promising results in areas including computer vision, natural language processing, speech recognition, and more. However, existing point estimation-based training methods for these models may result in predictive uncertainties that are not well calibrated, including the occurrence of confident errors. It is well-known that Bayesian inference can often provide more robust posterior predictive distributions in the classification setting compared to the use of point estimation-based training. However, the integrals required to perform Bayesian inference in neural network models are also well-known to be intractable. Monte Carlo methods provide one solution to representing neural network parameter posteriors as ensembles of networks, but this can require large amounts of both storage and compute time. To help overcome these problems, prior work introduced an interesting model training method referred to as Bayesian Dark Knowledge. In the classification setting, Bayesian Dark Knowledge attempts to compress the Bayesian posterior predictive distribution induced by the full parameter posterior of a "teacher" network into a "student" network. The parameter posterior of the teacher network is represented through a Monte Carlo ensemble of specific instances of the teacher network (the teacher ensemble), and the analytically intractable posterior predictive distributions are approximated as Monte Carlo averages over the output of the networks in the teacher ensemble. The major advantage of this approach is that the computational complexity of prediction at test time is drastically reduced compared to computing Monte Carlo averages over a large ensemble of networks. As a result, methods of this type have the potential to be much better suited to learning models for deployment in resource-constrained settings. In this paper, we present a Bayesian posterior distillation framework that generalizes the Bayesian Dark Knowledge approach in several significant directions.
The primary modeling and algorithmic contributions of this work are: we generalize the target of distillation in the classification case from the posterior predictive distribution to general posterior expectations; we generalize the student architecture from being restricted to match the teacher architecture to being a free choice in the distillation procedure. The primary empirical contributions of this work are evaluating the distillation of both the posterior predictive distribution and expected posterior entropy across a range of models and data sets including manipulations of data sets that increase posterior uncertainty; and evaluating the impact of the student model architecture on distillation performance including the investigation of sparsity-inducing regularization and pruning for student model architecture optimization. The key empirical findings are that distilling into a student model that matches the architecture of the teacher, as in , can be sub-optimal; and student architecture optimization methods can identify significantly improved student models. We note that the significance of generalizing distillation to arbitrary posterior expectations is that it allows us to capture a wider range of useful statistics of the posterior that are of interest from an uncertainty quantification perspective. As noted above, we focus on the case of distilling the expected posterior entropy in addition to the posterior predictive distribution itself. When combined with the entropy of the posterior predictive distribution, the expected posterior entropy enables disentangling model uncertainty (epistemic uncertainty) from fundamental uncertainty due to class overlap (aleatoric uncertainty). This distinction is extremely important in determining why predictions are uncertain for a given data case. Indeed, the difference between these two terms is the basis for the Bayesian active learning by disagreement (BALD) score used in active learning, which samples instances with the goal of minimizing model uncertainty . The remainder of this paper is organized as follows. In the next section, we begin by presenting material and related work in Section 2. In Section 3, we present the proposed framework and associated Generalized Posterior Expectation Distillation (GPED) algorithm. In Section 4, we present experiments and . Additional details regarding data sets and experiments can be found in Appendix A, with supplemental included in Appendix B. In this section we present material on Bayesian inference for neural networks, and related work on approximate inference, and model compression and pruning. Let p(y|x, θ) represent the probability distribution induced by a deep neural network classifier over classes y ∈ Y = {1, .., C} given feature vectors x ∈ R D. The most common way to fit a model of this type given a data set D = {(x i, y i)|1 ≤ i ≤ N } is to use maximum conditional likelihood estimation, or equivalently, cross entropy loss minimization (or their penalized or regularized variants). However, when the volume of labeled data is low, there can be multiple advantages to considering a full Bayesian treatment of the model. Instead of attempting to find the single (locally) optimal parameter set θ * according to a given criterion, Bayesian inference uses Bayes rule to define the posterior distribution p(θ|D, θ 0) over the unknown parameters θ given a prior distribution P (θ|θ 0) with prior parameters θ 0 as seen in Equation 1. 
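For readability, the standard forms of the two displays referenced above (Equations 1 and 2), reconstructed from the surrounding definitions, are:

```latex
% Equation 1: parameter posterior (Bayes rule)
p(\theta \mid \mathcal{D}, \theta_0)
  = \frac{p(\theta \mid \theta_0)\,\prod_{i=1}^{N} p(y_i \mid x_i, \theta)}
         {\int p(\theta' \mid \theta_0)\,\prod_{i=1}^{N} p(y_i \mid x_i, \theta')\, d\theta'}

% Equation 2: posterior predictive distribution
p(y \mid x, \mathcal{D}, \theta_0)
  = \int p(y \mid x, \theta)\, p(\theta \mid \mathcal{D}, \theta_0)\, d\theta
```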
For prediction problems in machine learning, the quantity of interest is typically not the parameter posterior itself, but the posterior predictive distribution p(y|x, D, θ 0) obtained from it as seen in Equation 2. The primary problem with applying Bayesian inference to neural network models is that the distributions p(θ|D, θ 0) and p(y|x, D, θ 0) are not available in closed form, so approximations are required, which we discuss in the next section. Most Bayesian inference approximations studied in the machine learning literature are based on variational inference (VI) or Markov Chain Monte Carlo (MCMC) methods . In VI, an auxiliary distribution q φ (θ) is defined to approximate the true parameter posterior p(θ|D, θ 0). The variational parameters φ are selected to minimize the Kullback-Leibler (KL) divergence between q φ (θ) and p(θ|D, θ 0). first studied applying VI to neural networks. later presented a method based on stochastic VI with improved scalability. In the closely related family of expectation propagation (EP) methods , present an online EP algorithm for neural networks with the flexibility of representing both continuous and discrete weights. Hernández- present the probabilistic backpropagation (PBP) algorithm for approximate Bayesian learning of neural network models, which is an example of an assumed density filtering (ADF) algorithm that, like VI and EP, generally relies on simplified posterior densities. The main drawback of VB, EP, and ADF is that they all typically in biased posterior estimates for complex posterior distributions. MCMC methods provide an alternative family of sampling-based posterior approximations that are unbiased, but are often computationally more expensive to use at training time. MCMC methods allow for drawing a correlated sequence of samples θ t ∼ p(θ|D, θ 0) from the parameter posterior. These samples can then be used to approximate the posterior predictive distribution as a Monte Carlo average as shown in Equation 3. Neal addressed the problem of Bayesian inference in neural networks using Hamiltonian Monte Carlo (HMC) to provide a set of posterior samples. A bottleneck with this method is that it uses the full dataset when computing the gradient needed by HMC, which is problematic for larger data sets. While this scalability problem has largely been solved by more recent methods such as stochastic gradient Langevin dynamics (SGLD) , the problem of needing to compute over a large set of samples when making predictions at test or deployment time remains. Bayesian Dark Knowledge is precisely aimed at reducing the test-time computational complexity of Monte Carlo-based approximations for neural networks. In particular, the method uses SGLD to approximate the posterior distribution using a set of posterior parameter samples. These samples can be thought of as an ensemble of neural network models with identical architectures, but different parameter values. This posterior ensemble is used as the "teacher" in a distillation process that trains a single "student" model to match the teacher ensemble's posterior predictive distribution . The major advantage of this approach is that it can drastically reduce the test time computational complexity of posterior predictive inference relative to using a Monte Carlo average computed using many samples. Finally, we note that with the advent of Generative Adversarial Networks , there has also been work on generative models for approximating posterior sampling. 
and both propose methods for learning to generate samples that mimic those produced by SGLD. However, while these approaches may provide a speed-up relative to running SGLD itself, the ing samples must still be used in a Monte Carlo average to compute a posterior predictive distribution in the case of Bayesian neural networks. This is again a potentially costly operation and is exactly the computation that Bayesian Dark Knowledge addresses. As noted above, the problem that Bayesian Dark Knowledge attempts to solve is reducing the test-time computational complexity of using a Monte-Carlo posterior to make predictions. In this work, we are particularly concerned with the issue of enabling test-time speed-storage-accuracy trade-offs. The relevant material includes methods for network compression and pruning. Previous work has shown that overparameterised deep learning models tend to show much better learnability. Further, it has also been shown that such overparameterised models rarely use their full capacity and can often be pruned back substatially without significant loss of generality (; ; ; ; ; ;).; are some examples that use Group LASSO regularization at their core. use hierarchical priors to prune neurons instead of weights. An advantage of these methods over ones which induce connection-based sparsity is that these methods directly produce smaller networks after pruning (e.g., fewer units or channels) as opposed to networks with sparse weight matrices. This makes it easier to realize the ing computational savings, even on platforms that do not directly support sparse matrix operations. In this section, we describe our proposed framework for distilling general Bayesian posterior expectations for neural network classification models and discuss methods for enabling test-time speed-storage-accuracy trade-offs for flexible deployment of the ing models. There are many possible inferences of interest given a Bayesian parameter posterior P (θ|D, θ 0). We consider the general case of inferences that take the form of posterior expectations as shown in Equation 4 where g(y, x, θ) is an arbitrary function of y, x and θ. Important examples of functions g(y, x, θ) include g(y, x, θ) = p(y|x, θ), which in a posterior expectation yielding the posterior predictive distribution p(y|x, D, θ 0), as used in Bayesian Dark Knowledge; g(y, x, θ) = C y =1 p(y |x, θ) log p(y |x, θ), which yields the posterior predictive entropy H(y|x, D, θ 0) 1; and g(y, x, θ) = p(y|x, θ)(1 − p(y|x, θ)), which in the posterior marginal variance σ 2 (y|x, D, θ 0). While the posterior predictive distribution p(y|x, D, θ 0) is certainly the most important posterior inference from a predictive standpoint, the entropy and variance are also important from the perspective of uncertainty quantification. Our goal is to learn to approximate posterior expectations E p(θ|D,θ 0) [g(y, x, θ)] under a given teacher model architecture using a given student model architecture. The method that we propose takes as input the teacher model p(y|x, θ), the prior p(θ|θ 0), a labeled data set D, an unlabeled data set D, the function g(y, x, θ), a student model f (y, x|φ), an online expectation estimator, and a loss function (·, ·) that measures the error of the approximation given by the student model f (y, x|φ). Similar to , we propose an online distillation method based on the use of the SGLD sampler. We describe all of the components of the framework in the sections below, and provide a complete description of the ing method in Algorithm 1. 
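Given a Monte Carlo ensemble of teacher parameter samples, the three example choices of g above reduce to simple averages of per-sample predictive probabilities. The sketch below assumes a hypothetical interface in which the per-sample class probabilities p(y|x, θ_t) have already been evaluated on a batch of inputs; it also computes the BALD score mentioned above as the gap between the entropy of the averaged prediction and the averaged entropy.

```python
import numpy as np

def posterior_expectations(per_sample_probs):
    """per_sample_probs: (T, n, C) class probabilities from T posterior samples.
    Returns Monte Carlo estimates of the example posterior expectations."""
    probs = np.asarray(per_sample_probs)
    eps = 1e-12
    predictive = probs.mean(axis=0)                                  # E[p(y|x,theta)]
    expected_entropy = (-(probs * np.log(probs + eps)).sum(axis=2)).mean(axis=0)
    marginal_var = (probs * (1.0 - probs)).mean(axis=0)              # E[p(1-p)] per class
    bald = -(predictive * np.log(predictive + eps)).sum(axis=1) - expected_entropy
    return predictive, expected_entropy, marginal_var, bald
```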
We define the prior distribution over the parameters p(θ|θ 0) to be a spherical Gaussian distribution centered at µ = 0 with precision τ (we thus have θ 0 = [µ, τ]). We define S to be a minibatch of size M drawn from D. θ t denotes the parameter set sampled for the teacher model at sampling iteration t, while η t denotes the step size for the teacher model at iteration t. The Langevin noise is denoted by z t ∼ N (0, η t I). The sampling update for SGLD is given by: 1 Note that the posterior predictive entropy represents the average entropy integrated over the parameter posterior. It is not equal to the entropy of the posterior predictive distribution p(y|x, D, θ 0) in general. Distillation Procedure: For the distillation learning procedure, we make use of a secondary unlabeled data set D = {x i |1 ≤ i ≤ N}. This data set could use feature vectors from the primary data set D, or a larger data set. We note that due to autocorrelation in the sampled teacher model parameters θ t, we may not want to run a distillation update for every Monte Carlo sample drawn. We thus use two different iteration indices: t for SGLD iterations and s for distillation iterations. On every distillation step s, we sample a minibatch S from D of size M. For every data case i in S, we update an estimateĝ yis of the posterior expectation using the most recent parameter sample θ t, obtaining an updated estimateĝ yis+1 ≈ E p(θ|D,θ 0) [g(y, x, θ)] (we discuss update schemes in the next section). Next, we use the minibatch of examples S to update the student model. To do so, we take a step in the gradient direction of the regularized empirical risk of the student model as shown below where α s is the student model learning rate, R(φ) is the regularizer, and λ is the regularization hyper-parameter. We next discuss the estimation of the expectation targetsĝ yis. Expectation Estimation: Given an explicit collection of posterior samples θ 1,..., θ s, the standard Monte Carlo estimate of However, this estimator requires retaining the sequence of samples θ 1,..., θ s, which may not be feasible in terms of storage cost. Instead, we consider the application of an online update function. We define m is to be the count of the number of times data case i has been sampled up to and including distillation iteration s. An online update function U (ĝ yis, θ t, m is) takes as input the current estimate of the expectation, the current sample of the model parameters, and the number of times data case i has been sampled, and produces an updated estimate of the expectationĝ yis+1. Below, we define two different versions of the function. U s (ĝ yis, θ t, m is), updatesĝ yis using the current sample only, while U o (ĝ yis, θ t, m is) performs an online update equivalent to a full Monte Carlo average. We note that both update functions provide unbiased estimates of The online update U o will generally in much lower variance in the estimated values ofĝ yis, but it comes at the cost of needing to explicitly maintain the expectation estimatesĝ yis across learning iterations, increasing the storage cost of the algorithm. It is worthwhile noting that the extra storage and computation cost required by U o grows linearly in the size of the training set for the student. By contrast, the fully stochastic update is memoryless in terms of past expectation estimates, so the estimated expectationsĝ yis do not need to be retained across iterations ing in a space savings. We show a complete description of the proposed method in Algorithm 1. 
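A sketch of the two update rules is given below: U_s replaces the running estimate with the statistic computed from the current sample only, while U_o maintains a running mean over all samples seen for that data case. The exact displayed updates were lost in extraction, so the incremental-mean form used here is the standard one equivalent to a full Monte Carlo average.

```python
def update_stochastic(g_hat, g_current, m):
    """U_s: memoryless update -- use the statistic from the current posterior sample only."""
    return g_current

def update_online(g_hat, g_current, m):
    """U_o: incremental mean over the m-th observation of this data case,
    equivalent to averaging all samples seen so far."""
    return g_hat + (g_current - g_hat) / m
```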
The algorithm takes as input the teacher model p(y|x, θ), the parameters of the prior P (θ|θ 0), a labeled data set D, an unlabeled data set D, the function g(y, x, θ), the student model f (y, x|φ), an online expectation estimator U (ĝ yis, θ t, m is), a loss function (·, ·) that measures the error of the approximation given by f (y, x|φ), a regularization function R and regularization hyper-parameter λ, minibatch sizes M and M, the thinning interval parameter H, the SGLD burn-in time parameter B and step size schedules for the step sizes η t and α s. We note that the original Bayesian Dark Knowledge method is recoverable as a special case of this framework via the the choices g(y, x, θ) = p(y|x, θ), (p, q) = −p log(q), U = U s and p(y|x, θ) = f (y, x, φ) (e.g., the architecture of the student is selected to match that of the teacher). The original approach also uses a distillation data set D obtained from D by adding randomly Initialize s = 0, φ 0, θ 0,ĝ yi0 = 0, m i0 = 0,η 0 3: Sample S from D with |S| = M 5: if mod (t, H) = 0 and t > B then Sample S from D with |S | = M 8: end for 12: 13: end if end for 16: end procedure generated noise to instances from D on each distillation iteration, taking advantage of the fact that the choice U = U s means that no aspect of the algorithm scales with |D |. Our general framework allows for other trade-offs, including reducing the variance in the estimates ofĝ yis at the cost of additional storage in proportion to |D |. We also note that note that the loss function (p, q) = −p log(q) and the choice g(y, x, θ) = p(y|x, θ) are somewhat of a special case when used together as even when the full stochastic expectation update U s is used, the ing distillation parameter gradient is unbiased. To distill posterior entropy, we set g(y, x, θ) = y∈Y p(y|x, θ) log p(y|x, θ), U = U o and (h, h) = |h − h |. One of the primary motivations for the original Bayesian Dark Knowledge approach is that it provides an approximate inference framework that in significant computational and storage savings at test time. However, a drawback of the original approach is that the architecture of the student is chosen to match that of the teacher. As we will show in Section 4, this will sometimes in a student network that has too little capacity to represent a particular posterior expectation accurately. On the other hand, if we plan to deploy the student model in a low resource compute environment, the teacher architecture may not meet the specified computational constraints. In either case, we need a general approach for selecting an architecture for the student model. To begin to explore this problem, we consider to basic approaches to choosing student model architectures that enable trading off test time inference speed and storage for accuracy. A helpful aspect of the distillation process relative to a de novo architecture search problem is that the architecture of the teacher model is available as a starting point. As a first approach, we consider wrapping the proposed GPED algorithm with an explicit search over a set of student models that are "close" to the teacher. Specifically, we consider a search space obtained by starting from the teacher model and applying a width multiplier to the width of every fully connected layer and a kernel multiplier to the number of kernels in every convolutional layer. While this search requires exponential time in the number of layers, it provides a baseline for evaluating other methods. 
As an alternative approach with better computational complexity, we leverage the regularization function R(φ) included in the GPED framework to prune a large initial network using group 1 / 2 regularization . To apply this approach, we first must partition the parameters in the parameter vector φ across K groups G k. The form of the regularizer is As is well-established in the literature, this regularizer causes all parameters in a group to go to zero simultaneously when they are not needed in a model. To use it for model pruning for a unit in a fully connected layer, we collect all of that unit's inputs into a group. Similarly, we collect all of the incoming weights for a particular channel in a convolution layer together into a group. If all incoming weights associated with a unit or a channel have magnitude below a small threshold, we can explicitly remove them from the model, obtaining a more compact architecture. We also fine-tune our models after pruning. Finally, we note that any number of weight compressing, pruning, and architecture search methods could be combined with the GPED framework. Our goal is not to exhaustively compare such methods, but rather to demonstrate that GPED is sensitive to the choice of student model to highlight the need for additional research on the problem of selecting student model architectures. In this section, we present experiments and evaluating the proposed approach using multiple data sets, posterior expectations, teacher model architectures, student model architectures and basic architecture search methods. We begin by providing an overview of the experimental protocols used. Data Sets: We use the MNIST and CIFAR10 data sets as base data sets in our experiments. In the case of MNIST, posterior predictive uncertainty is very low, so we introduce two different modifications to explore the impact of uncertainty on distillation performance. The first modification is simply to subsample the data. The second modification is to introduce occlusions into the data set using randomly positioned square masks of different sizes, ing in masking rates from 0% to 86.2%. For CIFAR10, we only use sub-sampling. Full details for both data sets and the manipulations applied can be found in Appendix A.1. We evaluate a total of three teacher models in this work: a three-layer fully connected network (FCNN) for MNIST matching the architecture used by , a four-layer convolutional network for MNIST, and a five-layer convolutional network for CIFAR10. Full details of the teacher model architectures are given in Appendix A.2. For exhaustive search for student model architectures, we use the teacher model architectures as base models and search over a space of layer width multipliers K 1 and K 2 that can be used to expand sets of layers in the teacher models. A full description of the search space of student models can be found in Appendix A.2. Distillation Procedures: We consider distilling both the posterior predictive distribution and the posterior entropy, as described in the previous section. For the posterior predictive distribution, we use the stochastic expectation estimator U s while for entropy we used the full online update U o. We allow B = 1000 burn-in iterations and total of T = 10 6 training iterations. The prior hyper-parameters, learning rate schedules and other parameters vary by data set or distillation target and are fully described in Appendix A.2. For this experiment, we use the MNIST and CIFAR10 datasets without any subsampling or masking. 
For each dataset and model, we consider separately distilling the posterior predictive distribution and the posterior entropy. We fix the architecture of the student to match that of the teacher. To evaluate the performance while distilling the posterior predictive distribution, we use the negative log-likelihood (NLL) of the model on the test set. For evaluating the performance of distilling posterior entropy, we use the mean absolute difference between the teacher ensemble's entropy estimate and the student model output on the test set. The are given in Table 1. First, we note that the FCNN NLL on MNIST closely replicate the in , as expected. We also note that the error in the entropy is low for both the FCNN and CNN architectures on MNIST. However, the student model fails to match the NLL of the teacher on CIFAR10 and the entropy MAE is also relatively high. In Experiment 2, we will investigate the effect of increasing uncertainty on models applied to both data sets, while in Experiment 3 we will search for student model architectures that improve performance. No. of training samples Difference between teacher and student posterior entropy estimates on test data set. In the plots above, S denotes the student and T denotes the teacher. This experiment builds on Experiment 1 by exploring methods for increasing posterior uncertainty on MNIST (sub-sampling and masking) and CIFAR10 (sub-sampling). We consider the cross product of four sub-sampling rates and six masking rates for MNIST and three sub-sampling rates for CIFAR10. We consider the posterior predictive distribution and posterior entropy distillation targets. For the posterior predictive distribution we report the negative log likelihood (NLL) of the teacher, and the NLL gap between the teacher and student. For entropy, we report the mean absolute error between the teacher ensemble and the student. All metrics are evaluated on held-out test data. We also restrict the experiment to the case where the student architecture matches the teacher architecture, mirroring the Bayesian Dark Knowledge approach. In Figure 1, we show the for the convolutional models on MNIST and CIFAR10 respectively. The FCNN are similar to the CNN on MNIST and are shown in Figure 4 in Appendix B. In Appendix B, we also provide a performance comparison between the U o and U s estimators while distilling posterior expectations. As expected, the the NLL of the teacher decreases as the data set size decreases. We observe that changing the number of training samples has a similar effect on NLL gap for both CIFAR10 and MNIST. More specifically, for any fixed masking rate of MNIST (and zero masking rate for CIFAR10), we can see that the NLL difference between the student and teacher decreases with increasing training data. However, for MNIST we can see that the teacher NLL increases much more rapidly as a function of the masking rate. Moreover, the gap between the teacher and student peaks for moderate values of the masking rate. This fact is explained through the observation that when the masking rate is low, posterior uncertainty is low, and distillation is relatively easy. On the other hand, when the masking rate is high, the teacher essentially outputs the uniform distribution for every example, which is very easy for the student to represent. As a , the moderate values of the masking rate in the hardest distillation problem and thus the largest performance gap. 
For varying masking rates, we see exactly the same trend for the gap in posterior entropy predictions on MNIST. However, the gap for entropy prediction increases as a function of data set size for CIFAR10. Finally, as we would expect, the performance of distillation using the U o estimator is almost always better than that of the U s estimator (refer Appendix B). The key finding of this experiment is simply that the quality of the approximations provided by the student model varies as a function of properties of the underlying data set. Indeed, restricting the student architecture to match the teacher can sometimes in significant performance gaps. In the next experiment, we address the problem of searching for improved student model architectures. In this experiment, we compare the exhaustive search to the group 1 / 2 (group lasso) regularizer combined with pruning. For the pruning approach, we start with the largest student model considered under exhaustive search, and prune back from there using different regularization parameters λ, leading to different student model architectures. We present in terms of performance versus computation time (estimated in FLOPS), as well as performance vs storage cost (estimated in number of parameters). As performance measures for the posterior predictive distribution, we consider accuracy and negative log likelihood. For entropy, we use mean absolute error. In all cases are reported on test data. We consider both fully connected and convolutional models. Figure 2 shows for negative the log likelihood (NLL) of the convolutional model on MNIST with masking rate 29% and 60,000 training samples. We select this setting as illustrative of a difficult case for posterior predictive distribution distillation. We plot NLL vs FLOPS and NLL vs storage for all points encountered in each search. The solid blue line indicates the Pareto frontier. First, we note that the baseline student model (with architecture matching the teacher) from Experiment 2 on MNIST achieves an NLL of 0.469 at approximately 0.48 × 10 6 FLOPs and 0.03 × 10 6 parameters on this configuration of the data set. We can see that both methods for selecting student architectures provide a highly significant improvement over the baseline student architectures. On MNIST, the NLL is reduced to 0.30. Further, we can also see that the group 1 / 2 approach is able to obtain much better NLL at the same computation and storage cost relative to the exhaustive search method. Lastly, the group 1 / 2 method is able to obtain models on MNIST at less than 50% the computational cost needed by the baseline model with only a small loss in performance. Results for other models and distillation targets show similar trends and are presented in Appendix B. Additional experimental details are given in Appendix A.2. In summary, the key finding of this experiment is that the capacity of the student model has a significant impact on the performance of the distillation procedure, and methods for optimizing the student architecture are needed to achieve a desired speed-storage-accuracy trade-off. We have presented a framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network that generalizes the Bayesian Dark Knowledge approach in several significant directions. 
Our show that the performance of posterior distillation can be highly sensitive to the architecture of the student model, but that basic architecture search methods can help to identify student model architectures with improved speed-storage-accuracy trade-offs. There are many directions for future work including considering the distillation of a broader class of posterior statistics including percentiles, assessing and developing more advanced student model architecture search methods, and applying the framework to larger state-of-the-art models. A DATASETS AND MODEL DETAILS As noted earlier in the paper, the original empirical investigation of Bayesian Dark Knowledge for classification focused on the MNIST data set . However, the models fit to the MNIST data set have very low posterior uncertainty and we argue that it is thus a poor benchmark for assessing the performance of posterior distillation methods. In this section, we investigate two orthogonal modifications of the standard MNIST data set to increase uncertainty: reducing the training set size and masking regions of the input images. Our goal is to produce a range of benchmark problems with varying posterior predictive uncertainty. We also use the CIFAR10 data set in our experiments and employ the same subsampling technique. The full MNIST dataset consists of 60,000 training images and 10,000 test images, each of size 28 × 28, distributed among 10 classes. As a first manipulation, we consider sub-sampling the labeled training data to include 10,000, 20,000, 30,000 or all 60,000 data cases in the primary data set D when performing posterior sampling for the teacher model. Importantly, we use all 60,000 unlabeled training cases in the distillation data set D. This allows us de-couple the impact of reduced labeled training data on posterior predictive distributions from the effect of the amount of unlabeled data available for distillation. As a second manipulation, we generate images with occlusions by randomly masking out parts of each available training and test image. For generating such images, we randomly choose a square m × m region (mask) and set the value for pixels in that region to 0. Thus, the masking rate for a 28 × 28 MNIST image corresponding to the mask of size m × m is given by r = m×m 28×28. We illustrate original and masked data in Figure 3. We consider a range of square masks ing in masking rates between 0% and 86.2%. The full CIFAR10 dataset consists of 50,000 training images and 10,000 test images, each of size 32 × 32 pixels. We sub-sample the data into a primary training sets D containing 10,000, 20,000, and 50,000 images. As with MNIST, the sub-sampling is limited to training the teacher model only and we utilize all the 50,000 unlabeled training images in the distillation data set D. To demonstrate the generalizability of our methods to a range of model architectures, we run our experiments with both fully-connected, and convolutional neural networks. We note that our goal in this work is not to evaluate the GPED framework on state-of-the-art architectures, but rather to provide illustrative and establish methodology for assessing the impact of several factors including the level of uncertainty and the architecture of the student model. Teacher Models: We begin by defining the architectures used for the teacher model as follows: We use a 3-layer fully connected neural network. 
The architecture used is: Input For a CNN, we use two consecutive sets of 2D convolution and maxpooling layers, followed by two fully-connected layers. The architecture used is: Input(1,)-Conv(num kernels=10, kernel size=4, stride=1) -MaxPool(kernel size=2) -Conv(num kernels=20, kernel size=4, stride=1) -MaxPool(kernel size=2) -FC -FC (output). Similar to the CNN architecture used for MNIST, we use two consecutive sets of 2D convolution and max-pooling layers followed by fully-connected layers. In the architectures mentioned above, the "output" size will change depending on the expectation that we're distilling. For classification, the output size will be 10 for both datasets, while for the case of entropy, it will be 1. We use ReLU non-linearities everywhere between the hidden layers. For the final output layer, softmax is used for classification. In the case of entropy, we use an exponential activiation to ensure positivity. The student models used in our experiments use the above mentioned architectures as the base architecture. For explicitly searching the space of the student models, we use a set of width multipliers starting from the teacher architecture. The space of student architectures corresponding to each teacher model defined earlier is given below. The width multiplier values of K 1 and K 2 are determined differently for each of the experiments, and thus will be mentioned in later sections. Model and Distillation Hyper-Parameters: We run the distillation procedure using the following hyperparameters: fixed teacher learning rate η t = 4 × 10 −6 for models on MNIST and η t = 2 × 10 −6 for models on CIFAR10, teacher prior precision τ = 10, initial student learning rate α s = 10 −3, student dropout rate p = 0.5 for fully-connected models on MNIST (and zero otherwise), burn-in iterations B = 1000, thinning interval H = 100 for distilling predictive means and H = 10 for distilling entropy values, and total training iterations T = 10 6. For training the student model, we use the Adam algorithm (instead of plain steepest descent as indicated in Algorithm 1) and set a learning schedule for the student such that it halves its learning rate every 200 epochs for models on MNIST, and every 400 epochs for models on CIFAR10. Also, note that we only apply the regularization function R(φ s) while doing Group 1 / 2 pruning. Otherwise, we use dropout as indicated before. Hyper-parameters for Group 1 / 2 pruning experiments: For experiments involving group 1 / 2 regularizer, the regularization strength values λ are chosen from a log-scale ranging from 10 −8 to10−3. When using Group 1 / 2 regularizer, we do not use dropout for the student model. The number of fine-tuning epochs for models on MNIST and CIFAR100 are 600 and 800 respectively. At the start of fine-tuning, we also reinitialize the student learning rate α t = 10 −4 for fully-connected models and α t = 10 −3 for convolutional models. The magnitude threshold for pruning is = 10 −3. Supplemental Results for Experiment 2: Robustness to Uncertainty In Figure 4, we demonstrate the of Experiment 2 (Section 4.3), on fully-connected networks for MNIST. Additionally, in Tables [2- Figure 11 : Accuracy-Storage-Computation tradeoff while using CNNs on CIFAR10 with subsampling training data to 20,000 samples. (a) Test accuracy using posterior predictive distribution vs FLOPS found using exhaustive search. (b) Test accuracy using posterior predictive distribution vs FLOPS found using group 1 / 2 with pruning. 
(c) Test accuracy using posterior predictive distribution vs storage found using exhaustive search. (d) Test accuracy using posterior predictive distribution vs storage found using group 1 / 2 with pruning. The optimal student model for this configuration is obtained with group 1 / 2 pruning. It has approximately 5.4× the number of parameters and 5.6× the FLOPS of the base student model. | A general framework for distilling Bayesian posterior expectations for deep neural networks. | 658 | scitldr |
Variational Autoencoders (VAEs) have proven to be powerful latent variable models. How- ever, the form of the approximate posterior can limit the expressiveness of the model. Categorical distributions are flexible and useful building blocks for example in neural memory layers. We introduce the Hierarchical Discrete Variational Autoencoder (HD-VAE): a hi- erarchy of variational memory layers. The Concrete/Gumbel-Softmax relaxation allows maximizing a surrogate of the Evidence Lower Bound by stochastic gradient ascent. We show that, when using a limited number of latent variables, HD-VAE outperforms the Gaussian baseline on modelling multiple binary image datasets. Training very deep HD-VAE remains a challenge due to the relaxation bias that is induced by the use of a surrogate objective. We introduce a formal definition and conduct a preliminary theoretical and empirical study of the bias. Unsupervised learning has proven powerful at leveraging vast amounts of raw unstructured data (; ; ;). Through unsupervised learning, latent variable models learn the explicit likelihood over an unlabeled dataset with an aim to discover hidden factors of variation as well as a generative process. An example hereof, is the Variational Autoencoder (VAE) that exploits neural networks to perform amortized approximate inference over the latent variables. This approximation comes with limitations, both in terms of the latent prior and the amortized inference network . It has been proposed to go beyond Gaussian priors and approximate posterior using, for instance, autoregressive flows , a hierarchy of latent variables (Sønderby et al., 2016; Maaløe et al., 2016 Maaløe et al.,, 2019, a mixture of priors or discrete distributions (van den ; ; ; b,a;). Current state-of-the-art deep learning models are trained on web-scaled datasets and increasing the number of parameters has proven to be a way to yield remarkable . Nonetheless, time complexity and GPU memory are scarce resources, and the need for both resources increases linearly with the depth of neural network. and showed that large memory layers are an effective way to increase the capacity of a model while reducing the computation time. showed that discrete variational distributions are analogous to neural memory , which can be used to improve generative models . Also, memory values are yet another way to embed data, allowing for applications such as one-shot transfer learning and semi-supervised learning that scales . Depth promises to bring VAEs to the next frontier (Maaløe et al., 2019). However, the available computing resources may shorten that course. Motivated by the versatility and the scalability of discrete distributions, we introduce the Hierarchical Discrete Variational Autoencoder. HD-VAE is a VAE with a hierarchy of factorized categorical latent variables. In contrast to the existing discrete latent variable methods, our model (a) is hierarchical, (b) trained using Concrete/Gumbel-Softmax, (c) relies on a conditional prior that is learned end-to-end and (d) uses a variational distribution that is parameterized as a large stochastic memory layer. Despite being optimized for a biased surrogate objective we show that a shallow HD-VAE outperforms the baseline Gaussian-based models on multiple binary images datasets in terms of test log-likelihood. This motivates us to introduce a definition of the relaxation bias and to measure how it is affected by the configuration of latent variables. 
Hierarchical VAE Hierarchical VAEs define a model p θ (x, z) = p θ (x|z)p θ (z) where x is an observed variable and z = {z 1, . . ., z L} is a hierarchy of latent variables so that p θ (z) is factorized into L layers. The inference model q φ (z|x) usually exploits the inverse dependency structure. A vanilla hierarchical VAE in the following model: The choice of the VAE architecture is independent of the choice of the variational family and deeper models can easily be defined (see appendix F). Variational Neural Memory Each stochastic layer consists of N categorical random variables with K class probabilities π = {π 1, . . ., π K} and can be parametrized as a memory layer. recently proposed a scalable approach to attention-based memory layers that can be directly translated to the stochastic setting: Each categorical distribution is parametrized by factored keys {k 1, . . ., k K}, k i ∈ R d 1 and a parametric query model Q(h). If {v 1, ..., v K}, v i ∈ R d 2 are the memory values, for c ∈ R and i = 1,..., K, then the output of the memory layer for one variable is Optimization We wish to maximize the Evidence Lower Bound (ELBO): The subscript of L denotes the number of importance weighted samples. Guided by the analysis of Sønderby et al., we chose to use the Concrete/GumbelSoftmax relaxation for differentiable, approximate sampling of categorical variables. A relaxed categorical sample can be obtained as where {g i} are i.i.d. samples drawn from Gumbel, and τ ∈ R * + is a temperature parameter. As in the categorical case, the output of the memory layer is a convex combination of the memory values weighted by the entries ofz: The relaxed samplesz follow a Concrete/Gumbel-Softmax distribution q τ φ which depends on τ and converges to the categorical distribution q τ =0 φ = q φ as τ → 0 which is equivalent to applying the Gumbel-Max trick to soft samples, meaning z = H(z), H = one hot • arg max. When we extend the definition of f θ,φ to the domain of the relaxed samples, as in appendix D, the surrogate objective that is maximized becomes which is not guaranteed to be a lower bound of log p θ (x). Hence, we are interested in the relaxation bias that we define as: where is the original ELBO. If f θ,φ is a κ-Lipschitz for z, we can derive an upper bound for the relaxation bias as well as a new log-likelihood bound (relaxed ELBO) by adding a corrective term to the surrogate objective (derivation in appendix C). For a one layer Ladder Variational Autoencoder (LVAE), it in the following bounds: This new bound shows that, if the model is unconstrained, the relaxation bias is free to grow and that it grows with the number of discrete variables. In section 4.2, we provide empirical supporting the monotonically increasing property of the relaxation bias with regards to the number of stochastic units. Table 1: Sample estimates of the KL(q φ,θ (z|x)||p θ (z)) and the ELBO for 1000 importance weighted hard samples (τ = 0) using the same LVAE architecture and hyperparameters across all datasets. discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal discrete normal. To the best of our knowledge, HD-VAE is the only work that attempts to transform memory layers into a general purpose variational distribution. We trained HD-VAE for different number of layers of latent variables using the surrogate objective defined in the equation 5. 
In this experiment, we observe that HD-VAE consistently outperforms the baseline Gaussian model for multiple datasets and different number of latent layers (table 1). This shows that using variational memory layers yields a more flexible model than for the VAE with a Gaussian prior and the same number of latent variables. Furthermore, optimizing latent variable models is challenging (Sønderby et al., 2016;). In this experiment, the measured KL is higher for the discrete model, suggesting a well-tempered optimization behavior. Finally, we observe that increasing the depth of HD-VAE consistently improves on the log-likelihood, with a limit of three layers latent layers. The relaxation bias (section 2) may increase with the number of discrete latent variables. We trained HD-VAE for different numbers of stochastic units and different depths on Binarized MNIST using the surrogate objective defined in the equation 5. We measured the relaxation bias δ τ =0.1 on the test set (figure 1, table 4). The relaxation bias monotonically increases with the total number of discrete latent variables for different numbers of latent variables. This may explain why we found that HD-VAE with a large number of latent variables is not yet competitive with the Gaussian counterparts. In this preliminary research, we have introduced a design for variational memory layers and shown that it can be exploited to build hierarchical discrete VAEs, that outperform Gaussian prior VAEs. However, without explicitly constraining the model, the relaxation bias grows with the number of latent layers, which prevents us from building deep hierarchical models that are competitive with state-of-the-art methods. In future work we will attempt to harness the relaxed-ELBO to improve the performance of the HD-VAE further. Optimization During training, we mitigate the posterior collapse using the freebits strategy with λ = 2 for each stochastic layer. A dropout of 0.5 is used to avoid overfitting. We linearly decrease the temperature τ from 0.8 to 0.3 during the first 2 · 10 5 steps and from 0.3 to 0.1 during the next 2 · 10 5 steps. We use the Adamax optimizer with initial learning rate of 2 · 10 −3 for all parameters except for the memory values that are trained using a learning rate of 2 · 10 −2 to compensate for sparsity. We use a batch size of 128. All models are trained until they overfit and we evaluate the log-likelihood using 1000 importance weighted samples . Despite its large number of parameters, HD-VAE seems to be more robust to overfitting, which may be explained by the sparse update of the memory values. Runtime Sparse CUDA operations are currently not used, which means there is room to make HD-VAE more memory efficient. Even during training, one may truncate the relaxed samples to benefit from the sparse optimizations. The table 3 shows the average elapsed time training iteration as well as the memory usage for a 6 layers LVAE with 6 × 16 stochastic units and K = 16 2 and batch size of 128. Table 4: Measured one-importance-weighted ELBO on binarized MNIST for a LVAE model with different number of layers and different numbers of stochastic units using relaxed (τ = 0.1) and hard samples (τ = 0). We report N = L l=1 n l, where n l relates to the number of latent variables at the layer l and we set K = 256 for all the variables. Let x be an observed variable, and consider a VAE model with one layer of N categorical latent variables z = {z 1, . . ., z N} each with K classes. 
The generative model is p θ (x, z) and the inference model is q φ (z|x). For a temperature parameter τ > 0, the equivalent relaxed concrete variables are denotedẑ = {ẑ 1, . . .,ẑ N},ẑ i ∈ K. We define H = one hot • arg max and , using the Gumbel-Max trick, one can notice that We now assume that f θ,φ,x is κ-Lipschitz for L 2. Then, by definition, The relaxation bias can therefore be bounded as follows: Furthermore, we can define the adjusted Evidence Lower Bound for relaxed categorical variables (relaxed-ELBO): As shown by the experiment presented in the section 4.2, the quantity L τ >0 1 (θ, φ) appears to be a positive quantity. Furthermore, as the model attempts to exploit the relaxation of z to maximize the surrogate objective, one may consider that is a tight bound of δ τ (θ, φ), meaning that the relaxed-ELBO is a tight lower bound of the ELBO. The relaxed-ELBO is differentiable and may enable automatic control of the temperature as left and right terms of the relaxed-ELBO seek respectively seek for high and low temperature. κ-Lipschitz neural networks can be designed using Weight Normalization or Spectral Normalization . Nevertheless handling residual connections and multiple layers of latent variables is not trivial. We note however that in the case of a one layer VAE, one only needs to constrain the VAE decoder to be κ-Lispchitz as the surrogate objective is computed as In the appendix E, we show how the relaxed-ELBO can be extended to multiple layers of latent variables in the LVAE setting. Appendix D. Defining f θ,φ on the domain of the relaxed Categorical Variablesz f θ,φ is only defined for categorical samples. For relaxed samplesz, we define f θ,φ as:. The introduction of the function H is necessary as the terms (b) and (c) are only defined for categorical samples. This expression remains valid for hard samplesz. During training, relaxing the expressions (b) and (c) can potentially yield gradients of lower variance. In the case of a single categorical variable z described by the set of K class probabilities π = {π 1, ...π K}. One can define: Alternatively, asides from being a relaxed Categorical distribution, the Concrete/GumbelSoftmax also defines a proper continuous distribution. When treated as such, this in a proper probabilistic model with continuous latent variables, and the objective is unbiased. In that case, the density is given by We consider now an LVAE model: In the following, we will leave the conditioning on x implicit for convenience. The ELBO estimated with relaxed samples (relaxed-ELBO) is: The correct ELBO can be rewritten as follows: In this section we define the VAE (; ;), the LVAE (Sønderby et al., 2016) and BIVA (Maaløe et al., 2019). All models are characterized by a generative model p θ (x, z) = p θ (x|z)p θ (z) and can be coupled with any variational distribution. Variational Autoencoder (VAE) Variational Autoencoder with Skip-Connections (Skip-VAE) Ladder Variational Autoencoder (LVAE) Ladder Variational Autoencoder with Skip-Connections (Skip-LVAE) Bidirectional Variational Autoencoder (BIVA) | In this paper, we introduce a discrete hierarchy of categorical latent variables that we train using the Concrete/Gumbel-Softmax relaxation and we derive an upper bound for the absolute difference between the unbiased and the biased objective. | 659 | scitldr |
In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}. The proposed PowerSGD method simply raises the stochastic gradient to a certain power $\gamma\in$ during iterations and introduces only one additional parameter, namely, the power exponent $\gamma$ (when $\gamma=1$, PowerSGD reduces to SGD). We further propose PowerSGD with momentum, which we term \emph{PowerSGDM}, and provide convergence rate analysis on both PowerSGD and PowerSGDM methods. Experiments are conducted on popular deep learning models and benchmark datasets. Empirical show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, comparable generalization ability with SGD, and improved robustness to hyper-parameter selection and vanishing gradients. PowerSGD is essentially a gradient modifier via a nonlinear transformation. As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization. Stochastic optimization as an essential part of deep learning has received much attention from both the research and industry communities. High-dimensional parameter spaces and stochastic objective functions make the training of deep neural network (DNN) extremely challenging. Stochastic gradient descent (SGD) is the first widely used method in this field. It iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective evaluated on a mini-batch. Based on SGD, other stochastic optimization algorithms, e.g., SGD with Momentum (SGDM) , AdaGrad , RMSProp , Adam are proposed to train DNN more efficiently. Despite the popularity of Adam, its generalization performance as an adaptive method has been demonstrated to be worse than the non-adaptive ones. Adaptive methods (like AdaGrad, RMSProp and Adam) often obtain faster convergence rates in the initial iterations of training process. Their performance, however, quickly plateaus on the testing data . , the authors provided a convex optimization example to demonstrate that the exponential moving average technique can cause non-convergence in the RMSProp and Adam, and they proposed a variant of Adam called AMSGrad, hoping to solve this problem. The authors provide a theoretical guarantee of convergence but only illustrate its better performance on training data. However, the generalization ability of AMSGrad on test data is found to be similar to that of Adam, and a considerable performance gap still exists between AMSGrad and SGD . Indeed, the optimizer is chosen as SGD (or with Momentum) in several recent state-of-the-art works in natural language processing and computer vision , where in these instances SGD does perform better than adaptive methods. Despite the practical success of SGD, obtaining sharp convergence in the non-convex setting for SGD to efficiently escape saddle points (i.e., convergence to second-order stationary points) remains a topic of active research . Related Works: SGD, as the first efficient stochastic optimizer for training deep networks, iteratively updates the parameters of a model by moving them in the direction of the negative gradient of the objective function evaluated on a mini-batch. SGDM brings a Momentum term from the physical perspective, which obtains faster convergence speed than SGD. The Momentum idea can be seen as a particular case of exponential moving average (EMA). 
Then the adaptive learning rate (ALR) technique is widely adopted but also disputed in deep learning, which is first introduced by AdaGrad. Contrast to the SGD, AdaGrad updates the parameters according to the square roots of the sum of squared coordinates in all the past gradients. AdaGrad can potentially lead to huge gains in terms of convergence when the gradients are sparse. However, it will also lead to rapid learning rate decay when the gradients are dense. RMSProp, which first appeared in an unpublished work , was proposed to handle the aggressive, rapidly decreasing learning rate in AdaGrad. It computes the exponential moving average of the past squared gradients, instead of computing the sum of the squares of all the past gradients in AdaGrad. The idea of AdaGrad and RMSProp propelled another representative algorithm: Adam, which updates the weights according to the mean divided by the root mean square of recent gradients, and has achieved enormous success. Recently, research to link discrete gradient-based optimization to continuous dynamic system theory has received much attention . While the proposed optimizer excels at improving initial training, it is completely complementary to the use of learning rate schedules . We will explore how to combine learning rate schedules with the PoweredSGD optimizer in future work. While other popular techniques focus on modifying the learning rates and/or adopting momentum terms in the iterations, we propose to modify the gradient terms via a nonlinear function called the Powerball function by the authors of. , the authors presented the basic idea of applying the Powerball function in gradient descent methods. In this paper, we 1) systematically present the methods for stochastic optimization with and without momentum; 2) provide convergence proofs; 3) include experiments using popular deep learning models and benchmark datasets. Another related work was presented in , where the authors presented a version of stochastic gradient descent which uses only the signs of gradients. This essentially corresponds to the special case of PoweredSGD (or PoweredSGDM) when the power exponential γ is set to 0. We also point out that despite the name resemblance, the power PowerSign optimizer proposed in is a conditional scaling of the gradient, whereas the proposed PoweredSGD optimizer applies a component-wise trasformation to the gradient. Inspired by the Powerball method in , this paper uses Powerballbased stochastic optimizers for the training of deep networks. In particular, we make the following major contributions: 1. We propose the PoweredSGD, which is the first systematic application of the Powerball function technique in stochastic optimization. PoweredSGD simply applies the Powerball function (with only one additional parameter γ) on the stochastic gradient term in SGD. Hence, it is easy to implement and requires no extra memory. We also propose the PoweredSGDM as a variant of PoweredSGD with momentum to further improve its convergence and generalization abilities. 2. We have proved the convergence rates of the proposed PoweredSGD and PoweredSGDM. It has been shown that both the proposed PoweredSGD and PoweredSGDM attain the best known rates of convergence for SGD and SGDM on non-convex functions. 
In fact, to the knowledge of the authors, the bounds we proved for SGD and SGDM (as special cases of PoweredSGD and PoweredSGDM when γ = 1) provide the currently best convergence bounds for SGD and SGDM in the non-convex setting in terms of both the constants and rates of convergence (see, e.g.). 3. Experimental studies are conducted on multiple popular deep learning tasks and benchmark datasets. The empirically demonstrate that our methods gain faster convergence rate especially in the early train process compared with the adaptive gradient methods. Meanwhile, the proposed methods show comparable generalization ability compared with SGD and SGDM. Outline: The remainder of the paper is organized as below. Section 2 proposes the PoweredSGD and PoweredSGDM algorithms. Section 3 provides convergence of the proposed algorithms for non-convex optimization. Section 4 gives the experiment of the proposed algorithms on a variety of models and datasets to empirically demonstrate their superiority to other optimizers. Finally, are drawn in section 5. Notation: Given a vector a ∈ R n, we denote its i-th coordinate by a i; we use a to denote its 2-norm (Euclidean norm) and a p to denote its p-norm for p ≥ 1. Given two vectors a, b ∈ R n, we use a · b to denote their inner product. We denote by E[·] the expectation with respect to the underlying probability space. In this section, we present the main algorithms proposed in this paper: PoweredSGD and PoweredS-GDM. PoweredSGD combines the Powerball function technique with stochastic gradient descent, and PoweredSGDM is an extension of PoweredSGD to include a momentum term. We shall prove in Section 3 that both methods converge and attain at least the best known rates of convergence for SGD and SGDM on non-convex functions, and demonstrate in Section 4 the advantages of using PoweredSGD and PoweredSGDM compared to other popular stochastic optimizers for train deep networks. Train a DNN with n free parameters can be formulated as an unconstrained optimization problem where f (·): R n → R is a function bounded from below. SGD proved itself an efficient and effective solution for high-dimensional optimization problems. It optimizes f by iteratively updating the parameter vector x t ∈ R n at step t, in the opposite direction of a stochastic gradient g(x t, ξ t) (where ξ t denotes a random variable), which is calculated on t-th mini-batch of train dataset. The update rule of SGD for solving problem is starting from an arbitrary initial point x 1, where α t is known as the learning rate at step t. In the rest of the article, let g t = g(x t, ξ t) for the sake of notation. We then introduce a nonlinear transformation σ γ (z) = sign(z)|z| γ named as the Powerball function where sign(z) returns the sign of z, or 0 if z = 0. For any vector z = (z 1, . . ., z n) T, the Powerball function σ γ (z) is applied to all elements of z. A parameter γ ∈ R is introduced to adjust the mechanism and intensity of the Powerball function. Applying the Powerball function to the stochastic gradient term in the update rule gives the proposed PoweredSGD algorithm: where γ ∈ is an additional parameter. Clearly, when γ = 1, we obtain the vanilla SGD. The detailed pseudo-code of the proposed PoweredSGD is presented in Algorithm 1. The momentum trick inspired by physical processes; has been successfully combined with SGD to give SGDM, which almost always gives better convergence rates on train deep networks. 
We hereby follow this line to propose the PoweredSGD with Momentum (PoweredSGDM), whose update rule is Clearly, when β = 0, PowerSDGM reduces to PoweredSGD. Pseudo-code of the proposed PoweredSGDM is detailed in Algorithm 2. In this section, we present convergence of PoweredGD and PoweredSGDM in the non-convex setting. We start with some standard technical assumptions. First, we assume that the gradient of the objective function f is L-Lipschitz. We then assume that a stochastic first-order black-box oracle is accessible as a noisy estimate of the gradient of f at any point x ∈ R n, and the variance of the noise is bounded. Assumption 3.2 The stochastic gradient oracle gives independent and unbiased estimate of the gradient and satisfies: whereσ ≥ 0 is a constant. We will be working with a mini-batch size in the proposed PoweredSGD and PoweredSGDM. Let n t be the mini-batch size at the t-th iteration and the corresponding mini-batch stochastic gradient be given by the average of n t calls to the above oracle. Then by Assumption 3.2 we can show that In other words, we can reduce variance by choosing a larger mini-batch size (see Supplementary Material A.2). We now state the main convergence for the proposed PoweredSGD. Theorem 3.1 Suppose that Assumptions 3.1 and 3.2 hold. Let T be the number of iterations. PoweredSGD with an adaptive learning rate and mini-batch size B t = T (independent of a particular step t) can lead to where ε ∈, p = 1+γ 1−γ for any γ ∈ and p = ∞ for γ = 1. The proof of Theorem 3.1 can be found in the Supplementary Material A.2. Remark 3.1 The proposed PoweredSGD and PoweredSGDM have the potential to outperform popular stochastic optimizers by allowing the additional parameter γ that can be tuned for different training cases, and they always reduce to other optimizers when setting γ = 1. Remark 3.2 We leave ε ∈ to be a free parameter in the bound to provide trade-offs between bounds given by the curvature L and stochasticityσ. Ifσ = 0, we can choose ε → 0 and recover the convergence bound for PoweredGD (see Supplementary Material A.1). The above theorem provides a sharp estimate of the convergence of PoweredSGD in the following sense. When γ = 1, the convergence bound reduces to the best known convergence rate for SGD. Note that, because of the choice of batch size, it requires T 2 gradient evaluations in T iterations. So the convergence rate is effectively O(1/ √ T). This is the best known rate of convergence for. Whenσ = 0 (i.e., exact gradients are used and B t = 1), PoweredSGD can attain convergence in the order O(1/T), which is consistent with the convergence rate of gradient descent. We now present convergence analysis for PoweredSGDM. The proof is again included in the Supplementary Material B.2 due to the space limit. Theorem 3.2 Suppose that Assumptions 3.1 and 3.2 hold. Let T be the number of iterations. For any β ∈, PoweredSGDM with an adaptive learning rate and mini-batch size B t = T (independent of a particular step t) can lead to where ε ∈, p = 1+γ 1−γ for any γ ∈ and p = ∞ for γ = 1. Remark 3.4 Convergence analysis of stochastic momentum methods for non-convex optimization is an important but under-explored topic. While our on convergence analysis do not improve the rate of convergence for stochastic momentum methods in a non-convex setting, it does match the currently best known rate of convergence in special cases (γ = 0, 1) and offers very concise upper bounds in terms of the constants. 
The upper bound continuously interpolates the convergence rate for γ varying in and β varying in. The key technical that made the of Theorems 3.1 and 3.2 possible is Lemma B.1 in the Supplementary Material, which provide a tight estimate of accumulated momentum terms. We also note that the convergence rates for γ ∈ are entirely new and not reported elsewhere before. Even for the special case of γ = 0, 1, our proof differs from that of and seems more transparent. Remark 3.5 A large mini-batch (B t = T) is assumed for the convergence to hold. This is consistent with the convergence analysis in for the special case γ = 0. We assume this because it enables us to put analysis of PoweredGD and PoweredSGD in a unified framework so that we can obtain tighter bounds. In the stochastic setting, similar to Remark 3.3, we note that our proof requires T 2 gradient calls in T iterations and hence the effective convergence rate is O(1/ √ T), which is consistent with the known rate of convergence for SGD . The propose of this section is to demonstrate the efficiency and effectiveness of the proposed PoweredSGD and PoweredSGDM algorithms. We conduct experiments of different model architectures on datasets in comparison with widely used optimization methods including the non-adaptive method SGDM and three popular adaptive methods: AdaGrad, RMSprop and Adam. This section is mainly composed of two parts: the convergence and generalization experiments and the Powerball feature experiments. The setup for each experiment is detailed in Table 1 1. In the first part, we present empirical study of different deep neural network architectures to see how the proposed methods behave in terms of convergence speed and generalization. In the second part, the experiments are conducted to explore the potential features of PoweredSGD and PoweredSGDM. To ensure stability and reproducibility, we conduct each experiment at least 5 times from randomly initializations and the average are shown. The settings of hyper-parameters of a specific optimization method that can achieve the best performance on the test set are chosen for comparisons. When two settings achieve similar test performance, the setting which converges faster is adopted. We can have the following findings from our experiments: The proposed PoweredSGD and PoweredSGDM methods exhibit better convergence rate than other adaptive methods such as Adam and RMSprop. Our proposed methods achieve better generalization performance than adaptive methods although slightly worse than SGDM. Table 1: Summaries of the models and datasets in our experiments. Since the initial learning rate has a large impact on the performances of optimizers, we implement a logarithmically-spaced grid search strategy around the default learning rate for each optimization method, and leave the other hyper-parameters to their default settings. The default learning rate for SGDM is 0.01. We tune the learning rate on a logarithmic scale from {1, 0.1, 0.01, 0.001, 0.0001}. The momentum value in all experiments is set to default value 0.9. PoweredSGD, PoweredSGDM: The learning rates for PoweredSGD and PoweredSGDM are chosen from the same range {1, 0.1, 0.01, 0.001, 0.0001} as SGDM. The momentum value for PoweredS-GDM is also 0.9. Note that γ = 1 in Powerball function corresponds to the SGD or SGDM. Based on extensive experiments, we empirically tune γ from {0.5, 0.6, 0.7, 0.8, 0.9}. 
AdaGrad: The learning rates for AdaGrad are {1e-1, 5e-2, 1e-2, 5e-3, 1e-3} and we choose 0 for the initial accumulator value. RMSprop, Adam: Both have the default learning rate 1e-3 and their learning rates are searched from {1e-2, 5e-3, 1e-3, 5e-4, 1e-4}. The parameters β 1, β 2 and the perturbation value ε are set to default. As previous findings show, adaptive methods generalize worse than non-adaptive methods and carefully tuning the initial learning rate yields significant improvements for them. To better compare with adaptive methods, once we have found the value that was best performing in adaptive methods, we would try the learning rate between the best learning rate and its closest neighbor. For example, if we tried learning rates {1e-2, 5e-3, 1e-3, 5e-4, 1e-4} and 1e-4 was best performing, we would try the learning rate 2e-4 to see if performance was improved. We iteratively update the learning rate until performance could not be improved any more. For all experiments, we used a mini-batch size of 128. Fig. 1 shows the learning curves of three experiments we have conducted to observe the performance of PoweredSGD and PoweredSGDM in comparison with other widely-used optimization methods. ResNet-50 on CIFAR-10: We trained a ResNet-50 model on CIFAR-10 and our are shown in Fig. 1(a) and Fig. 1(b). We ran each experiment for a fixed budget of 160 epochs and reduced the learning rate by a factor of 10 after every 60 epochs. As the figure shows, the adaptive methods converged fast and appeared to be performing better than the non-adaptive method SGDM as expected. WideResNet on CIFAR-100: Next, we conducted experiments on the CIFAR-100 dataset using WideResNet model. The fixed budget here is 120 epochs and the learning rate reduces by a factor of 10 after every 60 epochs. The are shown in Fig. 1(e) and Fig. 1(f). The performance of the PoweredSGD and PoweredSGDM are still promising in both the train set and test set. PoweredSGD, PoweredSGDM and AdaGrad had the fastest initial progress. In the test set, PoweredSGD and PoweredSGDM had much better test accuracy than all other adaptive methods. ResNet-50 on ImageNet: Finally, we conducted experiments on the ImageNet dataset using ResNet-50 model. The fixed budget here is 120 epochs and the learning rate reduces by a factor of 10 after every 30 epochs. The are shown in Fig. 1(i) and Fig. 1(j). We observed that PoweredSGD and PoweredSGDM gave better convergence rates than adaptive methods while AdaGrad quickly plateaus due to too many parameter updates. For test set, we can notice that although SGDM achieved the best test accuracy of 76.27%, PoweredSGD and PoweredSGDM gave the of 73.71% and 73.96%, which were better than those of adaptive methods. Additional experiments (DenseNet-121 on CIFAR-10 and ResNeXt on CIFAR100) are shown in Fig. 1(c In deep learning, the phenomenon of gradient vanishing poses difficulties in training very deep neural networks by SGD. During the training process, the stochastic gradients in early layers can be extremely small due to the chain rule, and this can even completely stop the networks from being trained. Our proposed PoweredSGD method can relieve the phenomenon of gradient vanishing by effectively rescaling the stochastic gradient vectors. To validate this, we conduct experiments on the MNIST dataset by using a 13-layer fully-connected neural network with ReLU activation functions. The SGD and proposed PoweredSGD are compared in terms of train accuracy and 1-norm of gradient vector. 
As can be observed in Fig. 2 proposed the so-called Powerball accelerated gradient descent algorithm, which was updated as follows, The authors of Theorem A.1 Suppose that Assumption 3.1 holds. The PoweredGD scheme can lead to where T is the number of iterations and p = 1+γ 1−γ for any γ ∈ and p = ∞ for γ = 1. Proof: Denote by x the minimizer and f = f (x). Then, by the L-Lipschitz continuity of ∇ f and, Let By Hölder's inequality, for γ ∈ and with p = 1+γ 1−γ and q = 1+γ 2γ, we have It follows that which, by a telescoping sum, gives where 1 is vector with entries all given by 1. It is easy to see that the estimate is also valid for γ = 1 with p = ∞ and for γ = 0. The proof is complete. To analyze the convergence of PoweredSGD, we need some preliminary on the relation between mini-batch size and variance reduction of SGD. Let ∇ f (x) be the gradient of f at x ∈ R n. Suppose that we use the average of m calls to the stochastic gradient oracle, denoted by g(x, ξ i) (i = 1, · · ·, m), to estimate ∇ f (x). By Assumption 3.2, we have where in the second equality we used the fact that g(x, ξ i) (i = 1, · · ·, m) are drawn independently and all give unbiased estimate of ∇ f (x) (provided by Assumption 3.2). Now we are ready to present the proof of Theorem 3.1. Proof: By the L-Lipschitz continuity of ∇ f and, Fix any iteration number T > 1 and let ε ∈ to be chosen. We can estimate where the last inequality followed from the elementary inequality 2ab ≤ εa 2 + 1 ε b 2 for any positive real number ε and real numbers a, b. Substituting this into gives By the same argument in the proof for Theorem A.1, we can derive Taking conditional expectation from both sizes gives where σ 2 t is the variance of the t-th stochastic gradient approximation computed using the chosen mini-batch size B t = T, which therefore satisfies σ 2 t ≤σ 2 T. Taking expectation from both sides and performing a telescoping sum give The proof is complete. We first analyze the deterministic version of PoweredSGDM (denoted by PoweredGDM). The update rule for PoweredGDM is where β ∈ is a momentum constant and v 0 = 0. Clearly, when β = 0, the scheme also reduces to PoweredGD. Theorem B.1 Suppose that Assumption 3.1 holds. For any β ∈, the PoweredGDM scheme with an adaptive learning rate can lead to where T is the number of iterations and p = 1+γ 1−γ for any γ ∈ and p = ∞ for γ = 1. Proof: Let z t = x t + β 1−β v t. It can be verified that the PoweredGDM scheme satisfies By the L-Lipschitz continuity of ∇ f and, We can estimate where ε > 0 is to be chosen. By the L-Lipschitz continuity of ∇ f, Lemma B.1 For T ≥ 1, we have Proof: It is easy to show by induction that, for t ≥ 1, Indeed, we have v 1 = 0 and v 2 = −α 1 σ (∇ f (x 1)). Suppose that the above holds for t ≥ 1. Then By Lemma B.1, inequalities,, and a telescoping sum on, we get It is clear that ε = (1−β) 2 Lβ would minimize the bound on the right-hand side (among different choices of ε > 0) and give For any β ∈, we can choose (1−β) 2 1+β so that the bound reduces to which immediately gives the bound in the theorem by noting z 1 = x 1. Proof: The proof is built on that for Theorem B.1. With z t = x t + β 1−β v t, it can be verified that the PoweredSGDM scheme satisfies By the L-Lipschitz continuity of ∇ f and, Similar to the proof of Theorem B.1, we can estimate where ε 1 > 0 is to be chosen. By the L-Lipschitz continuity of ∇ f, Similar to Lemma B.1, we obtain We can also bound where ε > 0. 
By inequalities-, and a telescoping sum on, we get Setting ε 1 = (1−β) 2 Lβ and choosing (1−β) 2 1+β lead to which, by taking expectation from both sides and by the same argument in the proof for Theorem A.1, leads to which immediately gives the bound in the theorem by noting z 1 = x 1. Remark B.1 Clearly, Theorem 3.2 exactly reduces to Theorem B.1 whenσ = 0 and ε → 0. Moreover, when β = 0, Theorem 3.2 reduces exactly to Theorem 3.1. This in a sense shows that our estimates are sharp. A careful reader will notice that in Theorems 3.1 and 3.2, our estimates of convergence rates for PoweredSGD and PoweredSGDM, respectively, are in terms of the stochastic gradients g t. We now show that this is without loss of generality in view of Assumption 3.2. When γ = 1, we have where in the last equality, we used Assumption 3.2. This would imply ]. When γ ∈, by the equivalence of norm in R n, there exist positive constants C γ and D γ such that for all x ∈ R n. Hence which implies that ]. In other words, the estimates are equivalent (modulo a constant factor). We prefer the versions in Theorems 3.1 and 3.2, because the bounds are more elegant. The vanishing gradient problem is quite common when training deep neural networks using gradientbased methods and backpropagation. The gradients can become too small for updating the weight values. Eventually, this may stop the networks from further training. The Powerball function can help amplify the gradients especially when they approach zero. We visualized the amplification effects of Powerball function in Fig. 3. Thus, the attributes of PoweredSGD can help alleviate the vanishing gradient problem to some extent. We investigated the actual performance of PoweredSGD and SGD when dealing with very deep networks. We trained deep networks on the MNIST dataset using PoweredSGD with γ chosen from {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} and learning rate η chosen from {1.0, 0.1, 0.01, 0.001}. When γ = 1.0, the PoweredSGD becomes the vanilla SGD. The architecture of network depth which ranges from 12 to 15 with ReLU as the activation function is shown in Table 2. The are visualized using heatmaps in Fig. 4. 784 → 256 Hidde layers (×10/11/12/13) 256 → 256 Output layer 256 → 10 Table 2: The architecture of MLP in vanishing gradient experiments. As we can observe in the visualisation, when the network depth is more than 13 layers, increasing or decreasing the learning rate of SGD could not solve the vanishing gradient problem. For PoweredSGD, the usage of the Powerball function enables it to amplify the gradients and thus allows to further train deep networks with proper γ settings. This confirms our hypothesis that PoweredSGD helps alleviate the vanishing gradient problem to some extent. We also note that, when the network increases to 15 layers, both SGD and PoweredSGD could not train the network further. We speculate that this is due to the ratio of amplified gradients to the original gradients becomes too large (see Fig. 3) and a much smaller learning rate is needed (this is also consistent with the change of theoretical learning rates suggested in the convergence proofs as the gradient size decreases). Since PoweredSGD is essentially a gradient modifier, it would also be interesting to see how to combine it with other techniques for dealing with the vanishing gradient problem. Since PoweredSGD also reduces the gradient when the gradient size is large, it may also help alleviate the exploding gradient problem. 
This gives another interesting direction for future research. The Powerball function is a nonlinear function with a tunable hyper-parameter γ applied to gradients, which is introduced to accelerate optimization. To test the robustness of different γ, we trained ResNet-50 and DenseNet-121 on the CIFAR-10 dataset with PoweredSGD and SGDM. The parameter γ is chosen from {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} and the learning rate is chosen from {1.0, 0.1, 0.01, 0.001}. The PoweredSGD becomes the vanilla SGD when γ = 1. The maximum test accuracy is recorded and the are visualized in Fig. 5. Although the γ that gets the best test accuracy depends on the choice of learning rates, we can observe that γ can be selected within a wide range from 0.5 to 1.0 without much loss in test accuracy. Moreover, the Powerball function with a hyper-parameter γ could help regularize the test performance while the learning rate decreases. For example, when η = 0.001 and γ = 0.6, PoweredSGD get the best test accuracy of 90.06% compared with 79.87% accuracy of SGD. We also compare the convergence performance of different γ choice in Fig. 6. The training loss is recorded when training ResNet-50 on CIFAR-10 dataset. As the initial learning rate decreases, the range from which the hyper-parameter γ can be selected to accelerate training becomes wider. As a practical guide, γ = 0.8 seems a proper setting in most cases. It is again observed that the choice of γ in the range of 0.4-0.8 seems to provide improved robustness to the change of learning rates. Figure 5: Effects of different γ on test accuracy. We show the best Top-1 accuracy on CIFAR-10 dataset of ResNet-50 and DenseNet121 trained with PoweredSGD. Although the best choice of γ depends on learning rates, the selections can be quite robust considering the test accuracy. In the main part of the paper, we demonstrated through multiple experiments that PoweredSGD can achieve faster initial training. In this section we demonstrate that PoweredSGD as a gradient modifier is orthogonal and complementary to other techniques for improved learning. The learning rate is the most important hyper-parameter to tune for deep neural networks. Motivated by recent advancement in designing learning rate schedules such as CLR policies and SGDR , we conducted some preliminary experiments on combining learning rate schedules with PoweredSGD to improve its performance. The are shown in Fig. 7. The selected learning rate schedule is warm restarts introduced in , which reset the learning rate to the initial value after a cycle of decaying the learning rate with a Figure 6: Effects of different γ on convergence. We show the best train loss on CIFAR-10 dataset of ResNet-50 trained with PoweredSGD. While the γ which achieves the best convergence performance is closely related to the choice of learning rates, a γ chosen in the range of 0.4-0.6 seem to provide better robustness to change of learning rates. cosine annealing for each batch. In Fig. 7, SGD with momentum combined with warm restarts policy is named as SGDR. Similarly, PoweredSGDR indicates PoweredSGD combined with a warm restarts policy. The hyper-parameter setting is T 0 = 10 and T mult = 2 for warm restarts. We test their performance on CIFAR-10 dataset with ResNet-50. The showed that the learning rate policy can improve both the convergence and test accuracy of PoweredSGD. Indeed, PoweredSGDR achieved the lowest training error compared with SGDM and SGDR. 
The test accuracy for PoweredSGDR was also improved from the 94.12% accuracy of PoweredSGD to 94.64%. The demonstrate that the nonlinear transformation of gradients given by the Powerball function is orthogonal and complementary to existing methods. As such, its combination with other techniques could potentially further improve the performance. Figure 8 below, in which the hyper-parameters that lead to the best test accuracy are chosen and can be found in Table 3. Table 3: Hyper-parameter settings for experiments shown in Figure 8. | We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation. | 660 | scitldr |
We aim to build complex humanoid agents that integrate perception, motor control, and memory. In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision. We develop an architecture capable of surprisingly flexible, task-directed motor control of a relatively high-DoF humanoid body by combining pre-training of low-level motor controllers with a high-level, task-focused controller that switches among low-level sub-policies. The resulting system is able to control a physically-simulated humanoid body to solve tasks that require coupling visual perception from an unstabilized egocentric RGB camera during locomotion in the environment. Supplementary video link: https://youtu.be/fBoir7PNxPk In reinforcement learning (RL), a major challenge is to simultaneously cope with high-dimensional input and high-dimensional action spaces. As techniques have matured, it is now possible to train high-dimensional vision-based policies from scratch to generate a range of interesting behaviors ranging from game-playing to navigation BID17 BID32 BID41. Likewise, for controlling simulated bodies with a large number of degrees of freedom (DoFs), reinforcement learning methods are beginning to surpass optimal control techniques. Here, we try to synthesize this progress and tackle high-dimensional input and output at the same time. We evaluate the feasibility of full-body visuomotor control by comparing several strategies for humanoid control from vision. Both to simplify the engineering of a visuomotor system and to reduce the complexity of task-directed exploration, we construct modular agents in which a high-level system possessing egocentric vision and memory is coupled to a low-level, reactive motor control system. We build on recent advances in imitation learning to make flexible low-level motor controllers for high-DoF humanoids. The motor skills embodied by the low-level controllers are coordinated and sequenced by the high-level system, which is trained to maximize sparse task reward. Our approach is inspired by themes from neuroscience as well as ideas developed and made concrete algorithmically in the animation and robotics literatures. In motor neuroscience, studies of spinal reflexes in animals ranging from frogs to cats have led to the view that locomotion and reaching are highly prestructured, enabling subcortical structures such as the basal ganglia to coordinate a motor repertoire; cortical systems with access to visual input can then send low-complexity signals to motor systems in order to evoke elaborate movements BID7 BID1 BID9. The study of "movement primitives" for robotics descends from the work of BID16. Subsequent research has focused on innovations for learning or constructing primitives for control of movements BID15 BID20, deploying and sequencing them to solve tasks BID36 BID19 BID22, and increasing the complexity of the control inputs to the primitives BID31. Particularly relevant to our cause is the work of BID21, in which primitives were coupled by reinforcement learning to external perceptual inputs. Research in the animation literature has also sought to produce physically simulated characters capable of distinct movements that can be flexibly sequenced. This ambition can be traced to the virtual stuntman BID6 and has been advanced markedly in the work of Liu BID27.
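To make the described separation of control concrete, the following minimal Python sketch shows how a vision- and memory-based high-level module could select among frozen, proprioception-only low-level sub-policies. The class and method names (HierarchicalAgent, select, act) and the fixed switching period are illustrative assumptions for this sketch, not the paper's implementation or any specific library API.

```python
class HierarchicalAgent:
    """Minimal sketch of the two-level architecture described above.

    A high-level controller observes egocentric vision and proprioception and
    selects one of several pre-trained low-level sub-policies; the selected
    sub-policy maps proprioception alone to joint-position actions.
    """

    def __init__(self, high_level_policy, low_level_policies, switch_every=10):
        self.high_level_policy = high_level_policy    # trained on sparse task reward
        self.low_level_policies = low_level_policies  # list of frozen tracking sub-policies
        self.switch_every = switch_every              # assumed coarser high-level timescale
        self._active = 0
        self._t = 0

    def act(self, vision, proprioception):
        # The high level re-selects a sub-policy every `switch_every` control steps.
        if self._t % self.switch_every == 0:
            self._active = self.high_level_policy.select(vision, proprioception)
        self._t += 1
        # The chosen low-level controller only sees proprioceptive observations.
        return self.low_level_policies[self._active].act(proprioception)
```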
Further recent work has relied on reinforcement learning to schedule control policies known as "control fragments", each one able to carry out only a specialized short movement segment BID24. In work to date, such control fragments have yet to be coupled to visual input as we will pursue here. From the perspective of the RL literature BID38, motor primitives and control fragments may be considered specialized instantiations of "option" sub-policies. Our work aims to contribute to this multi-disciplinary literature by demonstrating concretely how control-fragment-like low-level movements can be coupled to and controlled by a vision and memory-based high-level controller to solve tasks. Furthermore, we demonstrate the scalability of the approach to greater number of control fragments than previous works. Taken together, we demonstrate progress towards the goal of integrated agents with vision, memory, and motor control. We present a system capable of solving tasks from vision by switching among low-level motor controllers for the humanoid body. This scheme involves a general separation of control where a low-level controller handles motor coordination and a high-level controller signals/selects lowlevel behavior based on task context (see also BID12 BID33 . In the present work, the low-level motor controllers operate using proprioceptive observations, and the high-level controller operate using proprioception along with first-person/egocentric vision. We first describe the procedure for creating low-level controllers from motion capture data, then describe and contrast multiple approaches for interfacing the high-and low-level controllers. For simulated character control, there has been a line of research extracting humanoid behavior from motion capture ("mocap") data. The SAMCON algorithm is a forward sampling approach that converts a possibly noisy, kinematic pose sequence into a physical trajectory. It relies on a beamsearch-like planning algorithm BID26 that infers an action sequence corresponding to the pose sequence. In subsequent work, these behaviors have been adapted into policies BID27 BID2. More recently, RL has also been used to produce time-indexed policies which serve as robust tracking controllers BID34. While the ing time-indexed policies are somewhat less general as a , time-indexing or phase-variables are common in the animation literature and also employed in kinematic control of characters BID14. We likewise use mocap trajectories as reference data, from which we derive policies that are single purpose -that is, each policy robustly tracks a short motion capture reference motion (2-6 sec), but that is all each policy is capable of. Humanoid body We use a 56 degree-of-freedom (DoF) humanoid body that was developed in previous work, a version of which is available with motion-capture playback in the DeepMind control suite BID39. Here, we actuate the joints with position-control: each joint is given an actuation range in [−1, 1], and this is mapped to the angular range of that joint. Single-clip tracking policies For each clip, we train a policy π θ (a|s, t) with parameters θ such that it maximizes a discounted sum of rewards, r t, where the reward at each step comes from a custom scoring function (see eqns. 1, 2 defined immediately below). This tracking approach most closely follows BID34. Note that here the state optionally includes a normalized time t that goes from 0 at the beginning of the clip to 1 at the end of the clip. 
For cyclical behaviors like locomotion, a gait cycle can be isolated manually and kinematically blended circularly by weighted linear interpolation of the poses to produce a repeating walk. The time input is reset each gait cycle (i.e. it follows a sawtooth function). As proposed in BID34, episodes are initialized along the motion capture trajectory, and episodes can be terminated when it is determined that the behavior has failed significantly or irrecoverably. Our specific termination condition triggers if parts of the body other than hands or feet make contact with the ground. See FIG0 for a schematic.
FIG0 caption: Illustration of tracking-based RL training. Training iteratively refines a policy to robustly track the reference trajectory as well as is physically feasible.
We first define an energy function most similar to SAMCON's BID26: E_total = w_qpos E_qpos + w_qvel E_qvel + w_ori E_ori + w_ee E_ee + w_vel E_vel + w_gyro E_gyro, where E_qpos is an energy defined on all joint angles, E_qvel on joint velocities, E_ori on the body root (global-space) quaternion, E_ee on egocentric vectors between the root and the end-effectors, E_vel on the (global-space) translational velocities, and E_gyro on the body root rotational velocities. More specifically, each term measures the deviation between the corresponding quantities in the current state and the reference, where q represents the pose and q̂ represents the reference pose. In this work, we used coefficients w_qpos = 5, w_qvel = 1, w_ori = 20, w_gyro = 1, w_vel = 1, w_ee = 2. We tuned these by sweeping over parameters in a custom implementation of SAMCON (not detailed here), and we have found these coefficients tend to work fairly well across a wide range of movements for this body. From the energy, we write the reward function: r_t = exp(-(β / w_total) E_total), where w_total is the sum of the per energy-term weights and β is a sharpness parameter (β = 10 throughout). Since all terms in the energy are non-negative, the reward is normalized, r_t ∈ [0, 1], with perfect tracking giving a reward of 1 and large deviations tending toward 0. Acquiring reference data features for some quantities required setting the body to the pose specified by the joint angles: e.g., setting x_pos, q_pos, and q_ori to compute the end-effector vectors q_ee. Joint angle velocities, root rotational velocities, and translational velocities (q_vel, q_gyro, x_vel) were derived from the motion capture data by finite difference calculations on the corresponding positions. Note that the reward function here was not restricted to egocentric features - indeed, the velocity and quaternion were non-egocentric. Importantly, however, the policy received exclusively egocentric observations, so that, for example, rotating the initial pose of the humanoid would not affect the policy's ability to execute the behavior. The full set of proprioceptive features we provided the policy consists of joint angles (q_pos) and velocities (q_vel), root-to-end-effector vectors (q_ee), root-frame velocimeter (q_veloc), rotational velocity (q_gyro), root-frame accelerometers (q_accel), and 3D orientation relative to the z-axis (r_z: functionally a gravity sensor). Low-level controller reinforcement learning details: Because the body is position-controlled (a_t has the same dimension and semantics as a subset of the body pose), we can pre-train the policy to produce target poses by supervised learning, maximizing over θ the sum over t of log π_θ(q*_pos,t+1 | s*_t, t) on the reference data. This produces very poor control but facilitates the subsequent stage of RL-based imitation learning. We generally found that training with some pretraining considerably shortened the time the training took to converge and improved the resulting policies.
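To make the scoring function concrete, here is a minimal sketch of the tracking reward in the form described above. The per-term energies are assumed to be computed elsewhere from the current and reference states; the exact distance used inside each term is not reproduced here.

import numpy as np

WEIGHTS = dict(qpos=5.0, qvel=1.0, ori=20.0, gyro=1.0, vel=1.0, ee=2.0)  # coefficients from the text
BETA = 10.0                                                              # sharpness parameter

def tracking_reward(energies, weights=WEIGHTS, beta=BETA):
    # energies[name] is the non-negative term E_name comparing the current state
    # with the reference pose for that quantity (joint angles, velocities, etc.).
    w_total = sum(weights.values())
    e_total = sum(weights[name] * energies[name] for name in weights)
    return float(np.exp(-beta / w_total * e_total))   # r_t in [0, 1]

print(tracking_reward({name: 0.0 for name in WEIGHTS}))  # perfect tracking -> 1.0
print(tracking_reward({name: 1.0 for name in WEIGHTS}))  # large deviations -> close to 0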
For RL, we performed off-policy training using a distributed actor-critic implementation, closest to that used in BID10. This implementation used a replay buffer and target networks as done in previous work. The Q-function was learned off-policy using TD-learning using importance-weighted Retrace BID30, and the actor was learned off-policy using SVG. This is to say that we learned the policy by taking gradients with respect to the Q function (target networks were updated every 500 learning steps). Gradient updates to the policy were performed using short time windows, {s τ, a τ} τ =1...T, sampled from replay: DISPLAYFORM3 where η was fixed in our experiments. While the general details of the RL algorithm are not pertinent to the success of this approach (e.g. Peng et al. FORMULA0 used on-policy RL), we found two details to be critical, and both were consistent with the reported in BID34. Policy updates needed to be performed conservatively with the update including a term which restricts BID35. Secondly, we found that attempting to learn the variance of the policy actions tended to in premature convergence, so best were obtained using a stochastic policy with fixed noise (we used noise with σ = .1). DISPLAYFORM4 We next consider how to design low-level motor controllers derived from motion capture trajectories. Broadly, existing approaches fall into two categories: structured and cold-switching controllers. In structured controllers, there is a hand-designed relationship between "skill-selection" variables and the generated behavior. Recent work by BID34 explored specific handdesigned, structured controllers. While parameterized skill-selection coupled with manual curation and preprocessing of motion capture data can produce artistically satisfying , the range of behavior has been limited and implementation requires considerable expertise and animation skill. By contrast, an approach in which behaviors are combined by a more automatic procedure promises to ultimately scale to a wider range of behaviors. Below, we describe some specific choices for both structured and cold-switching controllers. For structured control schemes, we consider: a steerable controller that produces running behavior with a controllable turning radius, and a switching controller that is a single policy that can switch between the behaviors learned from multiple mocap clips, with switch points allowed at the end of gait cycles. The allowed transitions were defined by a transition graph. For cold switching, we will not explicitly train transitions between behaviors. Steerable controller Following up on the ability to track a single cyclical behavior like locomotion described above, we can introduce the ability to parametrically turn. To do this we distorted the reference trajectory accordingly and trained the policy to track the reference with the turning radius as additional input. Each gait cycle we picked a random turning radius parameter and in that gaitcyle we rotate the reference clip heading (q ori) at that constant rate (with appropriate bookkeeping for other positions and velocities). The was a policy that, using only one gait cycle clip as input, could turn with a specified rate of turning. Switching controller An alternative to a single behavior with a single continuously controllable parameter is a single policy that is capable of switching among a discrete set of behaviors based on a 1-of-k input. 
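Before continuing with the structured controllers, the SVG-style actor update with a fixed-noise stochastic policy, described earlier in this section, can be sketched as below. Here q_net and policy_net are assumed PyTorch modules (the critic taking a state-action pair and the policy emitting a mean action); the Retrace critic target and the conservative regularization of the policy update are omitted.

import torch

def svg0_actor_loss(q_net, policy_net, states, noise_std=0.1):
    # SVG(0)-style update: backpropagate the critic through the sampled action.
    # The exploration noise is fixed (sigma = 0.1 in the text) rather than learned.
    mean_actions = policy_net(states)
    actions = mean_actions + noise_std * torch.randn_like(mean_actions)
    return -q_net(states, actions).mean()   # minimizing the negative ascends Q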
Training consisted of randomly starting in a pose sampled from a random mocap clip and transitioning among clips according to a graph of permitted transitions. Given a small, discrete set of clips that were manually "cut" to begin and end at similar points in a gait cycle, we initialized a discrete Markov process among clips with some initial distribution over clips and transitioned between clips that were compatible (walk forward to turn left, etc.) FIG1 ). Cold-switching of behaviors and control fragments We can also leave the task of sequencing behaviors to the high-level controller, instead of building structured low-level policies with explicit, designed transitions. Here, we did not attempt to combine the clips into a single policy; instead, we cut behaviors into short micro-behaviors of roughly 0.1 to 0.3 seconds, which we refer to as control fragments BID24. Compared to switching using the complete behaviors, the micro-behaviors, or control fragments, allow for better transitions and more flexible locomotion. Additionally, we can easily scale to many clips without manual intervention. For example, clip 1 would generate a list of fragments: DISPLAYFORM0 10 (a|s t, τ). When fragment 1 was chosen, τ the time-indexing variable was set to τ = 0 initially and ticked until, say, τ = 0.1. Choosing fragment 2, π 1 2, would likewise send a signal to the clip 1 policy starting from τ = 0.1, etc. Whereas we have to specify a small set of consistent behaviors for the other lowlevel controller models, we could easily construct hundreds (or possibly more) control fragments cheaply and without significant curatorial attention. Since the control fragments were not trained with switching behavior, we refer to the random access switching among fragments by the highlevel controller as "cold-switching" (Fig. 3).Figure 3: Cold-switching among a set of behaviors (A) only at end of clips to form a trajectory composed of sequentially activation of the policies (B). Alternatively, policies are fragmented at a pre-specified set of times, cutting the policy into sub-policies (C), which serve as control fragments, enabling sequencing at a higher frequency (D). We integrated the low-level controllers into an agent architecture with vision and and an LSTM memory in order to apply it to tasks including directed movements to target locations, a running course with wall or gap obstacles, a foraging task for "balls", and a simple memory task involving detecting and memorizing the reward value of the balls. The interface between the high-level controller and the low-level depends on the type of low-level controller: for the steerable controller, the high-level produces a one-dimensional output; for the switching and control fragment controllers, the high-level produces a 1-of-K index to select the lowlevel policies. The high-level policies are trained off-policy using data from a replay buffer. The replay buffer contains data generated from distributed actors, and in general the learner processes the same replay data multiple times. The high-level controller senses inputs from proprioceptive data and, for visual tasks, an egocentric camera mounted at the root of the body FIG2. A noteworthy challenge arises due to the movement of the camera itself during locomotion. The proprioceptive inputs are encoded by a single linear layer, and the image is encoded by a ResNet (see Appendix A). 
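As a rough illustration of how clips are cut into the 0.1-0.3 second control fragments discussed above, the sketch below assumes each fragment simply records which clip policy to run and over which phase interval; names are illustrative.

from dataclasses import dataclass

@dataclass
class ControlFragment:
    clip_id: int      # which single-clip tracking policy to run
    t_start: float    # normalized phase at which to index into that policy
    t_end: float      # fragment ends when the phase reaches this value

def make_fragments(clip_durations, fragment_len=0.1):
    # Selecting fragment i runs the underlying clip policy pi_clip(a | s, tau) with tau
    # swept from t_start to t_end, after which control returns to the high-level controller.
    fragments = []
    for clip_id, duration in enumerate(clip_durations):
        t = 0.0
        while t < duration:
            end = min(t + fragment_len, duration)
            fragments.append(ControlFragment(clip_id, t / duration, end / duration))
            t += fragment_len
    return fragments

# e.g. a 2 s run clip and a 1.5 s turn clip at 0.1 s granularity -> 20 + 15 fragments
fragments = make_fragments([2.0, 1.5], fragment_len=0.1)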
The separate inputs streams are then flattened, concatenated, and passed to an LSTM, enabling temporally integrated decisions, with a stochastic policy and a value function head. The high-level controller receives inputs at each time step even though it may only act when the previous behavior (gait cycle or control fragment) has terminated. Importantly, while the low-level skills used exclusively egocentric proprioceptive input, the highlevel controller used vision to select from or modulate them, enabling the system as a whole to effect visuomotor computations. High-level controller reinforcement learning details For the steerable controller, the policy was a parameterized Gaussian distribution that produces the steering angle a s ∈ [−1.5, 1.5]. The mean of Gaussian was constrained via a tanh and sampled actions were clipped to the steering angle range. The steering angle was held constant for a full gait cycle. The policy was trained as previously described by learning a state-action value function off-policy using TD-learning with Retrace BID30 with the policy trained using SVG.For the switching controller and the discrete control fragments approach, the policy was a multinomial over the discrete set of behaviors. In either case, the high-level controller would trigger the behavior for its period T (a gait cycle or a fragment length). To train these discrete controllers, we fit the state-value baseline V -function using V-Trace and update the policy according to the method in BID4. While we provided a target for the value function loss at each time step, Figure 5: A. Go-to-target: in this task, the agent moves on an open plane to a target provided in egocentric coordinates. B. Walls: The agent runs forward while avoiding solid walls using vision. C. Gaps: The agent runs forward and must jump between platforms to advance. D. Forage: Using vision, the agent roams in a procedurally-generated maze to collect balls, which provide sparse rewards. E. Heterogeneous Forage: The agent must probe and remember rewards that are randomly assigned to the balls in each episode.the policy gradient loss for the high-level was non-zero only when a new action was sampled (every T steps).Query-based control fragment selection We considered an alternative family of ideas to interface with control fragments based on producing a Gaussian policy search query to be compared against a feature-key for each control fragment. We then selected the control fragment whose key was nearest the query-action. Our method was based on the Wolpertinger approach introduced in BID3. Here, the Q-function was evaluated for each of k nearest neighbors to the query-action, and the control fragment were selected with Boltzmann exploration, i.e. p(a DISPLAYFORM0, where h is the output of the LSTM. See Appendix A.3.3 for more details. The intuition was that this would allow the high-level policy to be less precise as the Qfunction could assist it in selecting good actions. However, this approach under-performed relative to discrete action selection as we show in our . We compared the various approaches on a variety of tasks implemented in MuJoCo BID40 . The core tasks we considered for the main comparisons were Go-to-target, wall navigation (Walls), running on gapped platforms (Gaps), foraging for colored ball rewards (Forage), and a foraging task requiring the agent to remember the reward value of the different colored balls (Heterogeneous Forage) (see Fig. 5). 
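A minimal PyTorch sketch of the high-level interface just described, with a small convolutional encoder standing in for the ResNet of Appendix A; the layer sizes and the proprioception dimension are illustrative assumptions, not the paper's exact values.

import torch
import torch.nn as nn

class HighLevelController(nn.Module):
    # Proprioception through a linear layer, egocentric 64x64 image through a conv encoder,
    # concatenated and fed to an LSTM core with a policy head (logits over low-level
    # fragments) and a value head.
    def __init__(self, proprio_dim, num_fragments, hidden=128):
        super().__init__()
        self.proprio_enc = nn.Linear(proprio_dim, 64)
        self.image_enc = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten())
        with torch.no_grad():
            conv_dim = self.image_enc(torch.zeros(1, 3, 64, 64)).shape[1]
        self.core = nn.LSTMCell(64 + conv_dim, hidden)
        self.policy_head = nn.Linear(hidden, num_fragments)
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, proprio, image, state):
        z = torch.cat([torch.relu(self.proprio_enc(proprio)), self.image_enc(image)], dim=-1)
        h, c = self.core(z, state)
        return self.policy_head(h), self.value_head(h).squeeze(-1), (h, c)

ctrl = HighLevelController(proprio_dim=60, num_fragments=359)   # sizes are illustrative
state = (torch.zeros(1, 128), torch.zeros(1, 128))
logits, value, state = ctrl(torch.zeros(1, 60), torch.zeros(1, 3, 64, 64), state)
fragment = torch.distributions.Categorical(logits=logits).sample()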
In Go-to-target, the agent received a sparse reward of 1 for each time step it was within a proximity radius of the target. For Walls and Gaps, adapted from to operate from vision, the agent received a reward proportional to its forward velocity. Forage was broadly similar to explore object locations in the DeepMind Lab task suite BID0 (with a humanoid body) while Heterogeneous Forage was a simplified version of explore object rewards. In all tasks, the body was initialized to a random pose from a subset of the reference motion capture data. For all tasks, other than Go-to-target, the high-level agent received a 64x64 image from the camera attached to the root of the body, in addition to the proprioceptive information. We compared the agents on our core set of tasks. Our overall best were achieved using control fragments with discrete selection (Fig. 6). Additional training details are provided in Appendix A. For comparison, we also include the control experiment of training a policy to control the humanoid from scratch (without low-level controllers) as well as training a simple rolling ball body. The Figure 6: Performance of various approaches on each core task. Of the approaches we compared, discrete switching among control fragments performed the best. Plots show the mean and standard error over multiple runs.performance of the rolling ball is not directly comparable because its velocity differs from that of the humanoid, but isolates the task complexity from the challenge of motor control of the humanoid body. The switching controllers selected between a base set of four policies: stand, run, left and right turn. For the control fragments approach we were able to augment this set as described in TAB2.The end-to-end approach (described in Appendix A.4) succeeded at only Go-to-target, however the ing visual appearance was jarring. In the more complex Forage task, the end-to-end approach failed entirely. The steering controller was also able to perform the Go-to-target task, but a fixed turning radius meant that it was unable to make a direct approach the target, ing in a long travel time to the target and lower score. Both the steering controller and switching controller were able to reach the end of the course in the Walls task, but only the control fragments approach allowed for sharper turns and quicker adjustments for agent to achieve a higher velocity. Generally, the switching controller with transitions started to learn faster and appeared the most graceful because of its predefined, smooth transitions, but its comparative lack of flexibility meant that its asymptotic task performance was relatively low. In the Forage task, where a score of > 150 means the agent is able to move around the maze and 600 is maximum collection of reward, the switching controller with transitions was able to traverse the maze but unable to adjust to the layout of the maze to make sharper turns to collect all objects. The control fragments approach was able to construct rotations and abrupt turns to collect the objects in each room. In the Gaps task, we were able to use the control fragments approach with 12 single-clip policies, where it would be laborious to pretrain transitions for each of these. In this task, the high-level controller selected between the 4 original stand, run and turn policies as well as 8 additional jumps, ing in 359 fragments, and was able to synthesize them to move forward along the separated platforms. 
In the final Heterogeneous Forage task, we confirmed that the agent, equipped with an LSTM in the high-level controller, was capable of memory-dependent control behavior. See our Extended Video 2 for a comprehensive presentation of the controllers. All control fragment comparisons above used control fragments of 3 time steps (0.09s). To further understand the performance of the control fragment approach, we did a more exhaustive comparison of performance on Go-to-target of the effect of fragment length, number of fragments, as well as Figure 7: Example agent-view frames and corresponding visuomotor salience visualizations. Note that the ball is more sharply emphasized, suggesting the selected actions were influenced by the affordance of tacking toward the ball.introduction of redundant clips (see appendix B). We saw benefits in early exploration due to using fragments for more than one time step but lower ultimate performance. Adding more fragments was helpful when those fragments were functionally similar to the standard set and the high-level controller was able to robustly handle those that involved extraneous movements unrelated to locomotion. While the query-based approaches did not outperform the discrete control fragment selection (Fig. 6), we include a representative visualization in Appendix A.3 to help clarify why this approach may not have worked well. In the present setting, it appears that the proposal distribution over queries generated by the high-level policy was high variance and did not learn to index the fragments precisely. On Forage, the high-level controller with discrete selection of control fragments generated structured transitions between fragments (Appendix C). Largely, movements remained within clip or behavior type. The high-level controller ignored some fragments involving transitions from standing to running and left-right turns to use fast-walk-and-turn movements. To assess the visual features that drove movements, we computed saliency maps BID37 showing the intensity of the gradient of the selected action's log-probability with respect to each pixel: DISPLAYFORM0 c |∇ Ix,y,c log π(a HL t |h t)|) with normalization Z and clipping g (Fig. 7). Consistently, action selection was sensitive to the borders of the balls as well as to the walls. The visual features that this analysis identifies correspond roughly to sensorimotor affordances BID8; the agent's perceptual representations were shaped by goals and action. In this work we explored the problem of learning to reuse motor skills to solve whole body humanoid tasks from egocentric camera observations. We compared a range of approaches for reusing lowlevel motor skills that were obtained from motion capture data, including variations related to those presented in BID24 BID34. To date, there is limited learning-based work on humanoids in simulation reusing motor skills to solve new tasks, and much of what does exist is in the animation literature. A technical contribution of the present work was to move past hand-designed observation features (as used in BID34) towards a more ecological observation setting: using a front-facing camera is more similar to the kinds of observations a real-world, embodied agent would have. We also show that hierarchical motor skill reuse allowed us to solve tasks that we could not with a flat policy. For the walls and go-to-target tasks, learning from scratch was slower and produced less robust behavior. For the forage tasks, learning from scratch failed completely. 
Finally, the heterogeneous forage is an example of task that integrates memory and perception. There are some other very clear continuities between what we present here and previous work. For learning low-level tracking policies from motion capture data, we employed a manually specified similarity measure against motion capture reference trajectories, consistent with previous work BID26 BID34. Additionally, the low-level policies were time-indexed: they operated over only a certain temporal duration and received time or phase as input. Considerably less research has focused on learning imitation policies either without a pre-specified scoring function or without time-indexing (but see e.g.). Compared to previous work using control fragments BID24, our low-level controllers were built without a sampling-based planner and were parameterized as neural networks rather than linear-feedback policies. We also want to make clear that the graph-transition and steerable structured low-level control approaches require significant manual curation and design: motion capture clips must be segmented by hand, possibly manipulated by blending/smoothing clips from the end of one clip to the beginning of another. This labor intensive process requires considerable skill as an animator; in some sense this almost treats humanoid control as a computer-aided animation problem, whereas we aim to treat humanoid motor control as an automated and data-driven machine learning problem. We acknowledge that relative to previous work aimed at graphics and animation, our controllers are less graceful. Each approach involving motion capture data can suffer from distinct artifacts, especially without detailed manual editing -the hand-designed controllers have artifacts at transitions due to imprecise kinematic blending but are smooth within a behavior, whereas the control fragments have a lesser but consistent level of jitter throughout due to frequent switching. Methods to automatically (i.e. without human labor) reduce movement artifacts when dealing with large movement repertoires would be interesting to pursue. Moreover, we wish to emphasize that due to the human-intensive components of training structured low-level controllers, fully objective algorithm comparison with previous work can be somewhat difficult. This will remain an issue so long as human editing is a significant component of the dominant solutions. Here, we focused on building movement behaviors with minimal curation, at scale, that can be recruited to solve tasks. Specifically, we presented two methods that do not require curation and can re-use low-level skills with cold-switching. Additionally, these methods can scale to a large number of different behaviors without further intervention. We view this work as an important step toward the flexible use of motor skills in an integrated visuomotor agent that is able to cope with tasks that pose simultaneous perceptual, memory, and motor challenges to the agent. Future work will necessarily involve refining the naturalness of the motor skills to enable more general environment interactions and to subserve more complicated, compositional tasks. A , all training was done using a distributed actor-learner architecture. Many asynchronous actors interact with the environment to produce trajectories of (s t, a t, r t, s t+1) tuples of a fixed rollout length, N. In contrast to BID4, each trajectory was stored in a replay buffer. The learner sampled trajectories of length N at random and performed updates. 
Each actor retrieved parameters from the learner at a fixed time interval. The learner ran on a single Pascal 100 or Volta 100 GPU. The plots presented use the steps processed by the learner on the x-axis. This is the number of transition retrieved from the replay buffer, which is equivalent to the number of gradient updates x batch size x rollout length. We performed all optimization with Adam and used hyperparameter sweeps to select learning rates and batch sizes. For the switching controller and control fragments approach we used a standard set of four policies trained from motion capture which imitated stand, run, left and right turn behaviors. In the switching controller, pretrained transitions were created in the reference data. For the control fragments approach, we were able to augment the set without any additional work and the selected policies are described in TAB2 In the heterogeneous forage task, the humanoid is spawned in a room with 6 balls, 3 colored red and 3 colored green. Each episode, one color is selected at random and assigned a positive value (+30) or a negative value (-10) and the agent must sample a ball and then only collect the positive ones. The architecture of the high-level controller consisted of proprioceptive encoder and an optional image encoder which, along with prior reward and action, were passed to an LSTM. This encoding core was shared with both the actor and critic. The details of the encoder are depicted in Fig. A. 1. The policy was trained with a SVG update. A state-action value / Q function was implemented as an MLP with dimensions in Table 1 and trained with a Retrace target. Target networks were used for Q and updated every 100 training iterations. The policy was also updated by an additional entropy cost at each time step, which was added to the policy update with a weight of 1e −5. The switching controller policy head took as input the outputs of the LSTM in Fig. A. 1. The policy head was an LSTM, with a state size of 128, followed by a linear layer to produce the logits of the multinomial distribution. The policy was updated with a policy gradient using N -step empirical returns with bootstrapping to compute an advantage, where N was equivalent to the rollout length in Table 1. The value-function (trained via V-Trace) was used as a baseline. The policy was also updated by an additional entropy cost at each time step, which was added to the policy update with a weight of.01. We train a policy to produce a continuous feature vector (i.e. the query-action), so the selector is parameterized by a diagonal multivariate Gaussian action model. The semantics of the query-action will correspond to the features in the control fragment feature-key vectors, which were partial state observations (velocity, orientation, and end-effector relative positions) of the control fragment's nominal start or end pose. Figure A.2: Illustration of query-based control fragment selection in which a query feature vector is produced, compared with key feature vectors for all control fragments, and the Q-value of selecting each control fragment in the current state is used to determine which control fragment is executed. In this approach, the Q function was trained with 1 step returns. So, for samples (s t, a t, r t, s t+1) from replay: DISPLAYFORM0 The total loss is summed across minibatches sampled from replay. 
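A small sketch of the query-based (Wolpertinger-style) selection just described: find the fragment keys nearest to the continuous query-action, evaluate the critic on each, and sample with Boltzmann exploration. The critic q_fn is assumed to be given; names are illustrative.

import numpy as np

def select_fragment(query, keys, q_fn, h, k=16, temperature=1.0, rng=None):
    # query: continuous query-action from the policy; keys: (num_fragments, dim) feature-keys;
    # q_fn(h, key) -> scalar value from the learned critic; h: current LSTM output.
    rng = np.random.default_rng() if rng is None else rng
    nearest = np.argsort(np.linalg.norm(keys - query, axis=1))[:k]   # k nearest keys to the query
    q = np.array([q_fn(h, keys[i]) for i in nearest])
    logits = q / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return nearest[rng.choice(len(nearest), p=probs)]                # Boltzmann exploration over Q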
Note that for the query-based approach we have query actions which are in the same space as but generally distinct from the reference keys of the control fragments which are selected after the sampling procedure. The high-level policy emits query actions, which are rectified to reference keys by a nearest lookup (i.e. the selected actions). This leads to two, slightly different high-level actions in the same space. This leads to the question of what is the appropriate action on which to perform both policy updates and value function updates. To handle this, it proved most stable to compute targets and loss terms for both the query-actions and selected actions for each state from replay. In this way, a single Q function represented the value of query-actions and selected actions. This was technically an approximation as the training of the Q-function pools these two kinds of action input. Finally, the policy update used the SVG-style update: DISPLAYFORM1 Figure A.3: Visualization (using PCA) of the actions produced by the trained query-based policy (on go-to-target). Query actions are the continuous actions generated by the policy. Reference keys are the feature vectors associated with the control fragments (here, features of the final state of the nominal trajectory of the fragment). Selected actions are the actions produced by the sampling mechanism and they are overlain to emphasize the control reference keys actually selected. The most prominent feature of this visualization is the lack of precision in the query actions. Note that 45/105 control fragments were selected by the trained policy. We hypothesize that the limited success of this approach is perhaps partly due to the impreciseness in their selectivity (see Fig A. 3). After finding these approaches were not working very well, an additional analysis of a trained discrete-selection agent, not shown here, found that the second most preferred control fragment (in a given state) was not usually the control fragment with the most similar reference key. This implies that the premise of the query-based approach, namely that similar fragments should be preferentially confused/explored may not be as well-justified in this case as we speculated, when we initially conceived of trying it. That analysis notwithstanding, we remain optimistic this approach may end up being more useful than we found it to be here. End-to-end training on each task was performed in a similar fashion to training the low-level controllers. Using the same architecture as in A.1, the policy was trained to output the 56-dimensional action for the position-controlled humanoid. As in the low-level training, the policy was trained with a SVG update and Q was trained with a Retrace target. The episode was terminated when the humanoid was in an irrecoverable state and when the head fell below a fixed height. We compared performance as a function of the number of fragments as a well as the length of fragments. If using a fixed number of behaviors and cutting them into control fragments of various lengths, two features are coupled: the length of the fragments vs. how many fragments there are. One can imagine a trade-off -more fragments might make exploration harder, but shorter temporal commitment to a fragment may ultimately lead to more precise control. To partially decouple the number of fragments from their length, we also compared performance with functionally redundant but larger sets of control fragments. 
Ultimately, it appears that from a strict task-performance perspective, shorter control fragments tend to perform best as they allow greatest responsiveness. That being said, the visual appearance of the behavior tends to be smoother for longer control fragments. Control fragments of length 3 (.09 sec) seemed to trade-off behavioral coherence against performance favorably. Hypothetically, longer control fragments might also shape the action-space and exploration distribution favorably. We see a suggestion of this with longer-fragment curves ascending earlier. Figure A.5: Transition density between control fragments for a trained agent on Forage. The colors reflect density of transitions within a clip/behavior class (darker is denser), and single fragment transition densities are overlain as circles where size indicates the density of that particular transition. Figure A.6: We depict a timeseries of the behavior of a trained high-level policy on Forage. In this particular trained agent, it frequently transitions to a particular stand fragment which it has learned to rely on. Green vertical lines depict reward acquisition. | Solve tasks involving vision-guided humanoid locomotion, reusing locomotion behavior from motion capture data. | 661 | scitldr |
The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models. By observing that a ReLU neuron is a product of a linear function with a gate (the latter determines whether the neuron is active or not), where both share a jointly trained weight vector, we propose to decouple the two. We introduce GaLU networks — networks in which each neuron is a product of a Linear Unit, defined by a weight vector which is being trained, with a Gate, defined by a different weight vector which is not being trained. Generally speaking, given a base model and a simpler version of it, the two parameters that determine the quality of the simpler version are whether its practical performance is close enough to the base model and whether it is easier to analyze it theoretically. We show that GaLU networks perform similarly to ReLU networks on standard datasets and we initiate a study of their theoretical properties, demonstrating that they are indeed easier to analyze. We believe that further research of GaLU networks may be fruitful for the development of a theory of deep learning. An artificial neuron with the ReLU activation function is the function f w (x): R d → R such that f w (x) = max{x w, 0} = 1 x w≥0 · x w.The latter formulation demonstrates that the parameter vector w has a dual role; it acts both as a filter or a gate that decides if the neuron is active or not, and as linear weights that control the value of the neuron if it is active. We introduce an alternative neuron, called Gated Linear Unit or GaLU for short, which decouples between those roles. A 0 − 1 GaLU neuron is a function g w,u (x): R d → R such that g w,u (x) = 1 x u≥0 · x w. GaLU neurons, and therefore GaLU networks, are at least as expressive as their ReLU counterparts, since f w = g w,w. On the other hand, GaLU networks appear problematic from an optimization perspective, because the parameter u cannot be trained using gradient based optimization (since ∇ u g w,u (x) is always zero). In other words, training GaLU networks with gradient based algorithms is equivalent to initializing the vector u and keeping it constant thereafter. A more general definition of a GaLU network is given in section 2.The main claim of the paper is that GaLU networks are on one hand as effective as ReLU networks on real world datasets (section 3) while on the other hand they are easier to analyze and understand (section 4). Many recent works attempt to understand deep learning by considering simpler models, that would allow theoretical analysis while preserving some of the properties of networks of practical utility. Our model is most closely related to two such proposals: linear networks and non-linear networks in which only the readout layer is being trained. Deep linear networks is a popular model for analysis that lead to impressive theoretical (e.g. BID14 ; BID4 ; BID7). Linear networks are useful in order to understand how well gradient-based optimization algorithms work on non-convex problems. The weakness of linear network is that their expressive power is very limited: linear networks can only express linear functions. It means that their usefulness to understand the practical success of standard networks is somewhat limited. Training only the readout layer is an alternative attempt to understand deep learning through simpler models, that also gave theoretical interesting (e.g. BID13 BID8 ; BID2). 
The idea is that all the layers but the last one implement a non-linear constant transformation, and the last layer is learning a linear function on top of this transformation. The weakness of this model is that there is a big practical difference between training all the layers of a network and training only the last one. Our model is similar in certain aspects to both of those models, but it enjoys a much better practical utility than either one. See section 3 for an empirical comparison. Recall the definition of a basic GaLU neuron given in equation 1. We consider a more general GaLU neuron of the form g_{w,u,σ}(x) = σ(x·u) · (x·w) for some non-linear scalar function σ: R → R. If σ is differentiable, we could train the vectors u with gradient based algorithms, but the focus of this paper is on untrained gates. That is, we assume that the vectors {u} are kept to their initial values throughout the optimization procedure and only the linear part of the GaLU neurons is being optimized. GaLU networks with a single hidden layer have the following property: for any given example, the values of the gates in the network remain constant. In networks with more than one hidden layer this is not true. Consider a standard fully connected feed-forward network; let x^(0) be the input to the network and let x^(1), x^(2), ... be the inputs to the intermediate layers of the network. The output of a GaLU neuron at layer i will be σ(x^(i-1)·u) · (x^(i-1)·w). So while the filter parameter vector, u, is not optimized upon, the value of the gate, σ(x^(i-1)·u), can change as x^(i-1) changes. This adds an additional complication to the dynamics of the optimization that we wish to avoid. An alternative way to define a GaLU neuron at layer i is σ(x^(0)·u) · (x^(i-1)·w). In that case, the value of the gate is determined by the original input, and only the linear part depends on the output of the previous layer of the network. We call such a neuron a GaLU0 neuron, and a GaLU0 network is a network where all the neurons are GaLU0 neurons. In GaLU0 networks the gate values remain constant along the training, producing simpler dynamics. In order to check the hypothesis that the effectiveness of ReLU networks stems mostly from the ability to train the linear part of the neurons, and not the gate part, we tested both GaLU0 and ReLU networks on the standard MNIST and Fashion-MNIST BID17 datasets. For both, we used PCA to reduce the input dimension to 64, and then trained two-hidden-layer fully-connected networks on them, with k hidden neurons at each hidden layer. FIG0 summarizes the results, showing that GaLU0 and ReLU achieve similar results, both outperforming linear networks of the same size. Training only the readout layer of a ReLU network gave much poorer results (which were omitted from the graphs for clarity). All models were trained using the same architectures: two fully connected hidden layers with k neurons. The input dimension was reduced to 64 with PCA. Consider a GaLU network with a single hidden layer of k neurons: N(x) = Σ_j α_j g_{w_j,u_j}(x), summing over j = 1, ..., k. A convenient property of a GaLU neuron is that it is linear in the weights w_j; hence, α_j g_{w_j,u_j}(x) = g_{α_j w_j, u_j}(x). It means that the network can be rewritten as N(x) = Σ_j g_{w̃_j,u_j}(x) with w̃_j = α_j w_j. Because we want to optimize over the weights w_1, ..., w_k, α_1, ..., α_k, we might as well optimize over the reparameterization w̃_1, ..., w̃_k without losing expressive power.
It means that in a GaLU network of this form, it is sufficient to train the first layer of the network, as the readout layer adds nothing to the expressiveness of the network. The previous term can be further simplified: N(x) = Σ_j σ(x·u_j) (x·w̃_j) = Φ_u(x)·w̃, where Φ_u(x) = (σ(x·u_1) x, ..., σ(x·u_k) x) and w̃ = (w̃_1, ..., w̃_k). So it turns out that a GaLU network is nothing more than a random non-linear transformation Φ_u: R^d → R^{kd} and then a linear function. There are different notions for the expressivity of a model, and one of the simplest ones is the finite-sample expressivity over a random sample. This notion fits well to our model, because we are not interested in the absolute expressivity of a GaLU network, but in the expressivity of a GaLU network with random filters. So the question is how well a randomly-initialized network can fit a random sample. Note that given the constant filters, solving for the best weights is a convex problem. Hence, there is no "expressivity - optimization gap" in GaLU networks - every expressivity result is immediately also an optimization result. Consider the following problem: let x_1, ..., x_m ~ N(0, I_d), u_1, ..., u_k ~ N(0, I_d) and y_1, ..., y_m ~ N(0, 1), all of which are independent. Clearly, it is impossible to generalize from such a sample to unseen examples; the best possible test loss is 1, and is achieved by the constant prediction 0. However, it is an interesting problem because it allows us to measure the expressivity of GaLU networks, by showing how much overfit we can expect from the network for a non-adversarial sample. Equivalently, it tells us how well the network can perform memorization tasks, where the only solution is to memorize the entire sample. We train the network for the standard mean-squared-error regression loss. Because the network is simply a linear function over a constant non-linear transformation, and because we use the MSE loss, there is a closed form solution to the optimization problem: w̃* = X̃⁺ y, with X̃⁺ being a pseudo-inverse of X̃, where X̃ denotes the m × kd matrix whose i-th row is Φ_u(x_i) and y = (y_1, ..., y_m). This gives us the following result. Let x_1, ..., x_m and u_1, ..., u_k be arbitrary vectors. Define X̃ as above. Let y_1, ..., y_m ~ N(0, 1) be independent random normal variables. Define the expected squared loss on the training set, for weights w, as L_S(w). Then, E[min_w L_S(w)] = (m - rank(X̃)) / m. Proof: Every vector y = (y_1, ..., y_m) ∈ R^m can be decomposed to a sum y = a + b where a is in the span of the columns of X̃ and b is in the orthogonal complement of that span. It follows that min_w L_S(w) = ||b||² / m. The claim follows because if y ~ N(0, I_m) then the expected value of ||b||² is m - rank(X̃). It is always true that rank(X̃) ≤ min{m, kd}. Empirical experimentation shows that if x_1, ..., x_m, u_1, ..., u_k ~ N(0, I_d) then with high probability rank(X̃) = min{m, kd}. The fact that the GaLU network turned out to be only a linear function on top of a non-linear transformation seems to be a peculiar mathematical accident, with little relevance to standard networks. So we empirically tested the behavior of both ReLU and GaLU networks on the above model. It turns out that ReLU outperforms GaLU by a small margin - it is never better than GaLU with double the number of neurons, and is often worse than that. ReLU can outperform GaLU, even though it is less expressive, because we don't train the value of the filters u_1, ..., u_k at all for the GaLU networks. It turns out that SGD over a ReLU network converges to better filters than a simple random initialization. One way to measure how much better those filters are is by trying to improve the initial filters of the GaLU network by randomly replacing them.
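Since a one-hidden-layer GaLU network reduces to a linear model over Φ_u, both the closed-form solution and the rank statement above are easy to check numerically; below is a sketch using the 0-1 gate. Variable names and the particular sizes are illustrative.

import numpy as np

def galu_features(X, U):
    # Phi_u(x) = (sigma(x.u_1) x, ..., sigma(x.u_k) x) with the 0-1 gate sigma(z) = 1[z >= 0].
    gates = (X @ U.T >= 0).astype(X.dtype)                            # (m, k)
    return (gates[:, :, None] * X[:, None, :]).reshape(len(X), -1)    # rows of X-tilde, shape (m, k*d)

rng = np.random.default_rng(0)
m, d, k = 200, 10, 10
X, U, y = rng.standard_normal((m, d)), rng.standard_normal((k, d)), rng.standard_normal(m)
Phi = galu_features(X, U)
w = np.linalg.pinv(Phi) @ y                                           # closed-form least-squares weights
print(np.linalg.matrix_rank(Phi), min(m, k * d))                      # rank is typically min(m, kd) = 100
print(np.mean((Phi @ w - y) ** 2), (m - np.linalg.matrix_rank(Phi)) / m)  # ~0.5 on average, matching the result above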
Consider for example the simple algorithm given in algorithm 1.Running this algorithm improves the of the GaLU networks, making them more competitive with the ReLU ones. FIG1 summarizes our . An important fact about artificial neural networks is that they have small generalization error in many real-life problems. Otherwise they wouldn't be very useful as a learning algorithm. ZhangAlgorithm 1 Improve GaLU filters Input: A sample S, number of neurons k, number of iterations n. Initialize u 1, u 2,..., u k randomly. Find an optimal solution w 1,..., w k. for i = 1 to n do Pick j ∼ Uniform {1, 2, ..., k}. Pickũ j randomly. Find an optimal solution for a GaLU network with filters u 1,..., u j−1,ũ j, u j+1,..., u k. If the new solution is better than the current one, update u j =ũ j. end for One of the main experiments they run was to train the network over a sample with randomized labels, and to observe that the network still achieved small training loss (but large test loss, naturally). So any generalization bound that can be applied to the randomized sample is necessarily too weak to explain the generalization of the natural sample. As our goal is to show that GaLU networks exhibit similar phenomena as ReLU networks, but may be easier to analyze, we first construct a similar experiment to that of BID18 and compare the performance of GaLU and ReLU networks. Consider the following natural model. Let c 1,..., c n ∼ N (0, I d) be n clusters centers, each one with a random labels b 1,..., b n. A data point (x, y) is generated by picking a random index i ∼ Uniform {1, 2, . . ., n}, and setting x = c i + ξ for ξ ∼ N (0, σ We fixed the number of samples m = 1000, the input dimension d = 30, the number of clusters n = 30, the number of hidden neurons k = 30 and σ x = 0.1. We calculated the train and test errors for different values of σ y and p and for a GaLU and ReLU networks. The are summarized in FIG3 . We can clearly see that GaLU and ReLU have similar statistical behavior, and that while the train error is always small, as the labels become noisier the generalization error increases. This matches the spirit of experiments reported in BID18 . Next, we turn to an analysis of this phenomenon. Since one hidden layer GaLU networks can be cast as linear predictors, we can rely on classic norm-based generalization bounds for linear predictors. In particular, for p ∈ {1, 2}, consider the class of linear predictors H p = {x → x w : w p ≤ B p}. BID15. This also induces an upper bound on the gap between the test and train loss (see again BID15 for Lipschitz loss functions and see BID16 for the relation between Rademacher complexity and the generalization of smooth losses such as the squared loss). The question is whether the 1 / 2 norm of w is correlated with the amount of noise in the data. To study this, we depict the gap between train and test error as a function of the norm of w for GaLU networks. As can be seen in FIG5, for both the 1 and 2 norm, there is a clear linear relation between w 2 p and the generalization gap. While the constants are far from what the bounds state, the linear correlation is very clear. Note that figure 4 deals with GaLU networks that were trained as linear functions (by using the closed form solution for the MSE loss), and indeed shows that such network with such training behave as the theory states for linear predictors. We do not get the same behavior when we (unnecessarily) train both layers of the network using SGD. 
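Returning to Algorithm 1 above, a runnable sketch of the filter-improvement loop is given below; it reuses galu_features and the closed-form solver from the earlier sketch, and the names are illustrative.

import numpy as np

def fit_loss(X, y, U):
    Phi = galu_features(X, U)              # feature map from the previous sketch
    w = np.linalg.pinv(Phi) @ y
    return np.mean((Phi @ w - y) ** 2)

def improve_filters(X, y, k, n_iters, rng):
    # Random local search over the fixed filters: propose replacing one filter with a
    # fresh random draw and keep the proposal only if the optimal training loss improves.
    U = rng.standard_normal((k, X.shape[1]))
    best = fit_loss(X, y, U)
    for _ in range(n_iters):
        j = rng.integers(k)
        candidate = U.copy()
        candidate[j] = rng.standard_normal(X.shape[1])
        loss = fit_loss(X, y, candidate)
        if loss < best:
            U, best = candidate, loss
    return U, best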
This matches the discussion in Section 5 of BID18, where the correlation between the 2 norm of the weights in a ReLU network and the test loss is discussed, and it is argued that there are more factors that affect the generalization properties. Indeed, many followup works show different capacity measures that may be more adequate for studying the generalization of deep learning (See for example). We next show a rather different analysis for a particular instance of linear regression. Consider a simple linear regression using the MSE, and denote the train and test loss by DISPLAYFORM0 Given a training set S, the MSE estimator is defined as w(S):= arg min w L S (w).We start with the following lemma. Lemma 1 (Follows from Corollary 2 of BID12) For a scalar σ ≥ 0 and a vector β ∈ R d, let D σ,β be the distribution over R d × R which is defined by the following generative DISPLAYFORM1 This lemma provides a complete analysis for the following experiment, which is similar to the experiments reported by BID18. We compare two distributions, the first is D σ,β for some vector β ∈ R d and for σ being close to 0, and the second is D 1,0. Note that the first distribution corresponds to a case in which we would like to be able to generalize, while the second distribution corresponds to a case in which we are fitting random noise and do not expect to generalize. We set the training set size to be m = d + 2 and we analyze the MSE estimator, w(S). As the lemma shows, the expected training losses on the first and second distributions are DISPLAYFORM2 respectively. Hence, the training loss should be small on both of the distributions. In contrast, the expected test loss on the first distribution is DISPLAYFORM3 while the expected test loss on the second distribution is DISPLAYFORM4 We see that while the train loss can be small on both distributions, in the test loss we see a big gap between the first distribution (assuming σ 1/ √ d) and the second distribution of purely random labels. This is exactly the type of phenomenon reported in BID18 -a sample with a small amount of noise achieves both small train and test losses, but a sample with random labels achieves a small train loss but a large test loss. Note that this is a natural property of the least squares solution, without any explicit regularization, picking a minimal-norm solution or using a specific algorithm for solving the problem. Lemma 1 gives us a very sharp analysis of linear regression. Unfortunately, the assumptions of Lemma 1 (which are based on the assumptions of Corollary 2 in BID12) are too strong -we need that m > d + 1 and that the instances will be generated based on a Gaussian distribution. While BID12 also includes asymptotic that are applicable for a larger set of distributions, we leave the application of them to GaLU networks for future work. In the analysis of the R d → R case we used the fact that a GaLU neuron g w,u is linear in the parameter w, and it allowed us to rephrase the problem as a convex problem. In the R d → R d case the situation is not as simple. In this case, every hidden neuron has d outgoing edges, and so we cannot use the same reparametrization trick as before. Even so, the output of a GaLU neuron is still linear in the parameter w. It means that for convex loss functions, finding the optimal weights for the first layer, keeping the weights of the second one constant, is a convex problem. The same doesn't hold for ReLU networks. 
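The two-distribution comparison in the linear-regression analysis above can be reproduced with a short simulation. Here we assume the generative process behind D_{σ,β} is y = β·x + σε with x ~ N(0, I_d) and ε ~ N(0, 1), which is consistent with D_{1,0} being pure noise; the constants are illustrative.

import numpy as np

def run_once(d, sigma, beta, rng, n_test=2000):
    m = d + 2                                    # training-set size used in the text
    X = rng.standard_normal((m, d))
    y = X @ beta + sigma * rng.standard_normal(m)
    w = np.linalg.pinv(X) @ y                    # the MSE estimator w(S)
    Xt = rng.standard_normal((n_test, d))
    yt = Xt @ beta + sigma * rng.standard_normal(n_test)
    return np.mean((X @ w - y) ** 2), np.mean((Xt @ w - yt) ** 2)

rng = np.random.default_rng(0)
d = 30
beta = rng.standard_normal(d) / np.sqrt(d)
signal = np.median([run_once(d, 0.01, beta, rng) for _ in range(200)], axis=0)
noise = np.median([run_once(d, 1.0, np.zeros(d), rng) for _ in range(200)], axis=0)
print(signal)   # small train loss and small test loss
print(noise)    # similarly small train loss, but test loss far above 1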
Finding the optimal weights for the second layer, keeping the weights of the first one constant, is also a convex problem. Even more specifically, the optimization problem over the two layers is biconvex (see BID3 for a survey). So instead of applying SGD, we can apply biconvex optimization algorithms, such as Alternate Convex Search (ACS). In the case of the MSE loss, there is a closed form solution for each step of ACS, and using it outperforms SGD for small enough samples 2. Even though it is of limited practical use, this algorithm might be interesting for the derivation of theoretical bounds for such networks. In addition, it turns out that as we increase the output dimension d, GaLU and ReLU networks becomes more similar. In section 4.1.1 we measured the difference between ReLU and GaLU for the problem where all the variables are i.i.d. N, and it turned out that ReLU outperforms GaLU to a small extent. We repeated this experiment with larger d, and saw that the difference between the two vanished quickly (see FIG6). ber of neurons k such that a one hidden layer network achieves MSE< 0.3 on the random regression problem. As the output dimension d grows, more neurons are needed. As demonstrated in figure 2, GaLU networks needs more neurons than ReLU networks for output dimension d = 1. For larger d GaLU is slightly better, but it is clear that the two networks exhibit very similar behavior. We used fixed sample size (m = 1024) and input dimension (d = 32) in the generation of this graph. The standard paradigm in deep learning is to use neurons of the form σ x w for some differentiable non linear function σ: R → R. In this article we proposed a different kind of neurons, σ i,j · x w, where σ i,j is some function of the example and the neuron index that remains constant along the training. Those networks achieve similar to those of their standard counterparts, and they are easier to analyze and understand. To the extent that our arguments are convincing, it gives new directions for further research. Better understanding of the one hidden layer case (from section 5) seems feasible. And as GaLU and ReLU networks behave identically for this problem, it gives us reasons to hope that understanding the behavior of GaLU networks would also explain ReLU networks and maybe other non-linearities as well. As for deeper network, it is also not beyond hope that GaLU0 networks would allow some better theoretical analysis than what we have so far. | We propose Gated Linear Unit networks — a model that performs similarly to ReLU networks on real data while being much easier to analyze theoretically. | 662 | scitldr |
Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a different distribution from the one used for training. With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify its predictive uncertainty and screen out these anomalous inputs. However, unlike standard learning tasks, there is currently no well established guiding principle for designing architectures that can accurately quantify uncertainty. Moreover, commonly used OoD detection approaches are prone to errors and even sometimes assign higher likelihoods to OoD samples. To address these problems, we first seek to identify guiding principles for designing uncertainty-aware architectures, by proposing Neural Architecture Distribution Search (NADS). Unlike standard neural architecture search methods which seek for a single best performing architecture, NADS searches for a distribution of architectures that perform well on a given task, allowing us to identify building blocks common among all uncertainty aware architectures. With this formulation, we are able to optimize a stochastic outlier detection objective and construct an ensemble of models to perform OoD detection. We perform multiple OoD detection experiments and observe that our NADS performs favorably compared to state-of-the-art OoD detection methods. Detecting anomalous data is crucial for safely applying machine learning in autonomous systems for critical applications and for AI safety . Such anomalous data can come in settings such as in autonomous driving ), disease monitoring , and fault detection (b). In these situations, it is important for these systems to reliably detect abnormal inputs so that their occurrence can be overseen by a human, or the system can proceed using a more conservative policy. The widespread use of deep learning models within these autonomous systems have aggravated this issue. Despite having high performance in many predictive tasks, deep networks tend to give high confidence predictions on Out-of-Distribution (OoD) data . Moreover, commonly used OoD detection approaches are prone to errors and even assign higher likelihoods to samples from other datasets . Unlike common machine learning tasks such as image classification, segmentation, and speech recognition, there are currently no well established guidelines for designing architectures that can accurately screen out OoD data and quantify its uncertainty. Such a gap in our knowledge makes Neural Architecture Search (NAS) a promising option to explore the better design of uncertaintyaware models . NAS algorithms attempt to find an optimal neural network architecture for a specific task. Existing efforts have primarily focused on searching for architectures that perform well on image classification or segmentation. However, it is unclear whether architecture components that are beneficial for image classification and segmentation models would also lead to better uncertainty quantification and thereafter be effective for OoD detection. Moreover, previous work on deep uncertainty quantification shows that ensembles can help calibrate OoD classifier based methods, as well as improve OoD detection performance of likelihood estimation models . Because of this, instead of a single best performing architecture for uncertainty awareness, one might consider a distribution of wellperforming architectures. 
Along this direction, designing an optimization objective which leads to uncertainty-aware models is also not straightforward. With no access to labels, unsupervised/self-supervised generative models which maximize the likelihood of in-distribution data become the primary tools for uncertainty quantification (a). However, these models counter-intuitively assign high likelihoods to OoD data (a; ; a; Shafaei et al.). Because of this, maximizing the log-likelihood is inadequate for OoD detection. On the other hand, proposed using the Widely Applicable Information Criterion (WAIC) , a penalized log-likelihood score, as the OoD detection criterion. However, the score was approximated using an ensemble of models that was trained on maximizing the likelihood and did not directly optimize the WAIC score. To this end, we propose a novel Neural Architecture Distribution Search (NADS) framework to identify common building blocks that naturally incorporate model uncertainty quantification and compose good OoD detection models. NADS is an architecture search method designed to search for a distribution of well-performing architectures, instead of a single best architecture by formulating the architecture search problem as a stochastic optimization problem. Using NADS, we optimize the WAIC score of the architecture distribution, a score that was shown to be robust towards model uncertainty. Such an optimization problem with a stochastic objective over a probability distribution of architectures is unamenable to traditional NAS optimization strategies. We make this optimization problem tractable by taking advantage of weight sharing between different architectures, as well as through a parameterization of the architecture distribution, which allows for a continuous relaxation of the discrete search problem. Using the learned posterior architecture distribution, we construct a Bayesian ensemble of deep models to perform OoD detection. Finally, we perform multiple OoD detection experiments to show the efficacy of our proposed method. Neural Architecture Search (NAS) algorithms aim to automatically discover an optimal neural network architecture instead of using a hand-crafted one for a specific task. Previous work on NAS has achieved successes in image classification , image segmentation , object detection , structured prediction , and generative adversarial networks . However, there has been no NAS algorithm developed for uncertainty quantificaton and OoD detection. NAS consists of three components: the proxy task, the search space, and the optimization algorithm. Prior work in specifying the search space either searches for an entire architecture directly, or searches for small cells and arrange them in a pre-defined way. Optimization algorithms that have been used for NAS include reinforcement learning (; ;), Bayesian optimization , random search , Monte Carlo tree search , and gradient-based optimization methods (b;). To efficiently evaluate the performance of discovered architectures and guide the search, the design of the proxy task is critical. Existing proxy tasks include leveraging shared parameters , predicting performance using a surrogate model (a), and early stopping ). To our best knowledge, all existing NAS algorithms seek a single best performing architecture. In comparison, searching for a distribution of architectures allows us to analyze the common building blocks that all of the candidate architectures have. 
Moreover, this technique can also complement ensemble methods by creating a more diverse set of models for the ensemble decision, an important ingredient for deep uncertainty quantification . Prior work on uncertainty quantification and OoD detection for deep models can be divided into model-dependent (; ;), and model-independent techniques (; ;). Model-dependent techniques aim to yield confidence measures p(y|x) for a model's prediction y when given input data x. However, a limitation of model-dependent OoD detection is that they may discard information regarding the data distribution p(x) when learning the task specific model p(y|x). This could happen when certain features of the data are irrelevant for the predictive task, causing information loss regarding the data distribution p(x). Moreover, existing methods to calibrate model uncertainty estimates assume access to OoD data during training (; b). Although the OoD data may not come from the testing distribution, this assumes that the structure of OoD data is known ahead of time, which can be incorrect in settings such as active/online learning where new training distributions are regularly encountered. On the other hand, model-independent techniques seek to estimate the likelihood of the data distribution p(x). These techniques include Variational Autoencoders (VAEs) , generative adversarial networks (GANs) , autoregressive models , and invertible flow-based models . Among these techniques, invertible models offer exact computation of the data likelihood, making them attractive for likelihood estimation. Moreover, they do not require OoD samples during training, making them applicable to any OoD detection scenario. Thus in this paper, we focus on searching for invertible flow-based architectures, though the presented techniques are also applicable to other likelihood estimation models. Along this direction, recent work has discovered that likelihood-based models can assign higher likelihoods to OoD data compared to in-distribution data (a;) (see Figure 13 for an example). One hypothesis for such a phenomenon is that most data points lie within the typical set of a distribution, instead of the region of high likelihood (b). Thus, Nalisnick et al. (2019b) recommend to estimate the entropy using multiple data samples to screen out OoD data instead of using the likelihood. Other uncertainty quantification formulations can also be related to entropy estimation . However, it is not always realistic to test multiple data points in practical data streams, as testing data often come one sample at a time and are never well-organized into in-distribution or out-of-distribution groups. With this in mind, model ensembling becomes a natural consideration to formulate entropy estimation. Instead of averaging the entropy over multiple data points, model ensembles produce multiple estimates of the data likelihood, thus "augmenting" one data point into as many data points as needed to reliably estimate the entropy. However, care must be taken to ensure that the model ensemble produces likelihood estimates that agree with one another on in-distribution data, while also being diverse enough to discriminate OoD data likelihoods. In what follows, we propose NADS as a method that can identify distributions of architectures for uncertainty quantification. Using a loss function that accounts for the diversity of architectures within the distribution, NADS allows us to construct an ensemble of models that can reliably detect OoD data. 
Putting Neural Architecture Distribution Search (NADS) under a common NAS framework , we break down our search formulation into three main components: the proxy task, the search space, and the optimization method. Specifying these components for NADS with the ultimate goal of uncertainty quantification for OoD detection is not immediately obvious. For example, naively using data likelihood maximization as a proxy task would run into the issue pointed out by Nalisnick et al. (2019a), with models assigning higher likelihoods to OoD data. On the other hand, the search space needs to be large enough to include a diverse range of architectures, yet still allowing a search algorithm to traverse it efficiently. In the following sections, we motivate our decision on these three choices and describe these components for NADS in detail. The first component of NADS is the training objective that guides the neural architecture search. Different from existing NAS methods, our aim is to derive an ensemble of deep models to improve model uncertainty quantification and OoD detection. To this end, instead of searching for architectures which maximize the likelihood of in-distribution data, which may cause our model to incorrectly assign high likelihoods to OoD data, we instead seek architectures that can perform entropy estimation by maximizing the Widely Applicable Information Criteria (WAIC) of the training data. The WAIC score is a Bayesian adjusted metric to calculate the marginal likelihood . This metric has been shown by to be robust towards the pitfall causing likelihood estimation models to assign high likelihoods to OoD data. The score is defined as follows: Here, E[·] and V[·] denote expectation and variance respectively, which are taken over all architectures α sampled from the posterior architecture distribution p(α). Such a strategy captures model uncertainty in a Bayesian fashion, improving OoD detection. Intuitively, minimizing the variance of training data likelihoods allows its likelihood distribution to remain tight which, by proxy, minimizes the overlap of in-distribution and out-of-distribution likelihoods, thus making them separable. Under this objective function, we search for an optimal distribution of network architectures p(α) by deriving the corresponding parameters that characterize p(α). Because the score requires aggregating the from multiple architectures α, optimizing such a score using existing search methods can be intractable, as they typically only consider a single architecture at a time. Later, we will show how to circumvent this problem in our optimization formulation. NADS constructs a layer-wise search space with a pre-defined macro-architecture, where each layer can have a different architecture component. Such a search space has been studied by (; b;), where it shows to be both expressive and scalable/efficient. The macro-architecture closely follows the Glow architecture presented in. Here, each layer consists of an actnorm, an invertible 1 × 1 convolution, and an affine coupling layer. Instead of pre-defining the affine coupling layer, we allow it to be optimized by our architecture search. The search space can be viewed in Figure 1. Here, each operational block of the affine coupling layer is selected from a list of candidate operations that include 3 × 3 average pooling, 3 × 3 max pooling, skip-connections, 3 × 3 and 5 × 5 separable convolutions, 3 × 3 and 5 × 5 dilated convolutions, identity, and zero. 
We choose this search space to answer the following questions towards better architectures for OoD detection: • What topology of connections between layers is best for uncertainty quantification? Traditional likelihood estimation architectures focus only on feedforward connections without adding any skip-connection structures. However, adding skip-connections may improve optimization speed and stability. • Are more features/filters better for OoD detection? More feature outputs of each layer should lead to a more expressive model. However, if many of those features are redundant, it may slow down learning, overfitting nuisances and ing in sub-optimal models. • Which operations are best for OoD detection? Intuitively, operations such as max/average pooling should not be preferred, as they discard information of the original data point "too aggressively". However, this intuition remains to be confirmed. Having specified our proxy task and search space, we now describe our optimization method for NADS. Several difficulties arise when attempting to optimize this setup. First, optimizing p(α), a distribution over high-dimensional discrete random variables α, jointly with the network parameters is intractable as, at worst, each network's optimal parameters would need to be individually identified. Second, even if we relax the discrete search space, the objective function involves computing an expectation and variance over all possible discrete architectures. To alleviate these problems, we first introduce a continuous relaxation for the discrete search space, allowing us to approximately optimize the discrete architectures through backpropagation and weight sharing between common architecture blocks. We then approximate the stochastic objective by using Monte Carlo samples to estimate the expectation and variance. Specifically, let A denote our discrete architecture search space and α ∈ A be an architecture in this space. Let l θ * (α) be the loss function of architecture α with its parameters set to θ * such that it satisfies θ * = arg min θ l(θ|α) for some loss function l(·). We are interested in finding a distribution p φ (α) parameterized by φ that minimizes the expected loss of an architecture α sampled from it. We denote this loss function as Solving L(φ) for arbitrary parameterizations of p φ (α) can be intractable, as the inner loss function l θ * (α) involves searching for the optimal parameters θ * of a neural network architecture α. Moreover, the outer expectation causes backpropagation to be inapplicable due to the discrete random architecture variable α. We adopt a tractable optimization paradigm to circumvent this problem through a specific reparameterization of the architecture distribution p φ (α), allowing us to backpropagate through the outer expectation and jointly optimize φ and θ. For clarity of exposition, we first focus on sampling an architecture with a single hidden layer. In this setting, we intend to find a probability vector φ = [φ 1, . . ., φ K] with which we randomly pick a single operation from a list of the random categorical indicator vector sampled from φ, where b i is 1 if the i th operation is chosen, and zero otherwise. Note that b is equivalent to the discrete architecture variable α in this setting. With this, we can write the random output y of the hidden layer given input x as To make optimization tractable, we relax the discrete mask b to be a continuous random variableb using the Gumbel-Softmax reparameterization as follows: Here, g 1... 
g k ∼ − log(− log(u)) where u ∼ Unif, and τ is a temperature parameter. For low values of τ,b approaches a sample of a categorical random variable, recovering the original discrete problem. While for high values,b will equally weigh the K operations . Using this, we can compute backpropagation by approximating the gradient of the discrete architecture α with the gradient of the continuously relaxed categorical random variableb, as With this backpropagation gradient defined, generalizing the above setting to architectures with multiple layers simply involves recursively applying the above gradient relaxation to each layer. With this formulation, we can gradually remove the continuous relaxation and sample discrete architectures by annealing the temperature parameter τ. With this, we are able to optimize the architecture distribution p φ (α) and sample candidate architectures for further retraining, finetuning, or evaluation. By sampling M architectures from the distribution, we are able to approximate the WAIC score expectation and variance terms as: Figure 2: Summary of our architecture search findings: the most likely architecture structure for each block K found by NADS. We applied our architecture search on five datasets: CelebA (Liu et al.), CIFAR-10, CIFAR-100, , SVHN , and MNIST (LeCun). In all experiments, we used the Adam optimizer with a fixed learning rate of 1 × 10 −5 with a batch size of 4 for 10000 iterations. We approximate the WAIC score using M = 4 architecture samples, and set the temperature parameter τ = 1.5. The number of layers and latent dimensions is the same as in the original Glow architecture , with 4 blocks and 32 flows per block. Images were resized to 64 × 64 as inputs to the model. With this setup, we found that we are able to identify neural architectures in less than 1 GPU day. Our findings are summarized in Figure 2, while more samples from our architecture search can be seen in Appendix C. Observing the most likely architecture components found on all of the datasets, a number of notable observations can be made: • The first few layers have a simple feedforward structure, with either only a few convolutional operations or average pooling operations. On the other hand, more complicated structures with skip connections are preferred in the deeper layers of the network. We hypothesize that in the first few layers, simple feature extractors are sufficient to represent the data well. Indeed, recent work on analyzing neural networks for image data have shown that the first few layers have filters that are very similar to SIFT features or wavelet bases . • The max pooling operation is almost never selected by the architecture search. This confirms our hypothesis that operations that discard information about the data is unsuitable for OoD detection. However, to our surprise, average pooling is preferred in the first layers of the network. We hypothesize that average pooling has a less severe effect in discarding information, as it can be thought of as a convolutional filter with uniform weights. • The deeper layers prefer a more complicated structure, with some components recovering the skip connection structure of ResNets . We hypothesize that deeper layers may require more skip connections in order to feed a strong signal for the first few layers. This increases the speed and stability of training. Moreover, a larger number of features can be extracted using the more complicated architecture. 
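To make the relaxed search procedure of the previous subsection concrete, the sketch below shows one Gumbel-Softmax mixed operation of the kind used to parameterize p_φ(α), together with a Monte-Carlo estimate of the WAIC objective from M sampled forward passes. This is a minimal PyTorch-style illustration under our assumptions; the module names, candidate-operation list, and hyperparameter values are illustrative rather than taken from a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable block: a Gumbel-Softmax weighted sum of K candidate operations."""
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)                    # e.g. convs, pooling, identity, zero
        self.phi = nn.Parameter(torch.zeros(len(candidate_ops)))   # architecture logits (phi)

    def forward(self, x, tau=1.5):
        # Relaxed one-hot selection b_tilde; gradients flow back into phi.
        b_tilde = F.gumbel_softmax(self.phi, tau=tau, hard=False)
        return sum(w * op(x) for w, op in zip(b_tilde, self.ops))

def waic_objective(log_probs):
    """log_probs: (M, batch) log-likelihoods of a batch under M sampled architectures.
    Returns the per-image WAIC estimate E[log p(x)] - V[log p(x)], to be maximized."""
    return log_probs.mean(dim=0) - log_probs.var(dim=0)
```

Annealing tau towards zero recovers discrete operation choices, and maximizing the batch mean of waic_objective plays the role of the stochastic search objective described above.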
Interestingly enough, we found that the architectures that we sample from our NADS perform well in image generation without further retraining, as shown in Appendix D. Using the architectures sampled from our search, we create a Bayesian ensemble of models to estimate the WAIC score. Each model of our ensemble is weighted according to its probability, as in. The log-likelihood estimate as well as the variance of this model ensemble Here,'Baseline' denotes the method proposed by. Subcaptions denote training-testing set pairs. Additional figures are provided in Appendix G. is given as follows: Intuitively, we are weighing each member of the ensemble by their posterior architecture distribution p φ (α), a measure of how likely each architecture is in optimizing the WAIC score. We note that for our setup, V[log p αi (x)] is zero for each model in our ensemble; however, for models which do have variance estimates, such as models that incorporate variational dropout; ), this term may be nonzero. Using these estimates, we are able to approximate the WAIC score in equation. We trained our proposed method on 4 datasets: CIFAR-10, CIFAR-100 , SVHN , and MNIST (LeCun). In all experiments, we randomly sampled an , and Outlier Exposure (OE) (b ensemble of M = 5 models from the posterior architecture distribution p φ * (α) found by NADS. Although these models can sufficiently perform image synthesis without retraining as shown in Appendix D, we observed that further retraining these architectures led to a significant improvement in OoD detection. Because of this, we retrained each architecture on data likelihood maximization for 150000 iterations using Adam with a learning rate of 1 × 10 −5. We first show the effects of increasing the ensemble size in Figure 3 and Appendix F. Here, we can see that increasing the ensemble size causes the OoD WAIC scores to decrease as their corresponding histograms shift away from the training data WAIC scores, thus improving OoD detection performance. Next, we compare our ensemble search method against a traditional ensembling method that uses a single Glow architecture trained with multiple random initializations. As shown in Table 2, we find that our method is superior compared to the traditional ensembling method when compared on OoD detection using CIFAR-10 as the training distribution. We then compared our NADS ensemble OoD detection method for screening out samples from datasets that the original model was not trained on. For SVHN, we used the Texture, Places, LSUN, and CIFAR-10 as the OoD dataset. For CIFAR-10 and CIFAR-100, we used the SVHN, Texture, Places, LSUN, CIFAR-100 (CIFAR-10 for CIFAR-100) datasets, as well as the Gaussian and Rademacher distributions as the OoD dataset. Finally, for MNIST, we used the not-MNIST, F-MNIST, and K-MNIST datasets. We compared our method against a baseline method that uses maximum softmax probability (MSP) , as well as two popular OoD detection methods: ODIN and Outlier Exposure (OE) (b). ODIN attempts to calibrate the uncertainty estimates of an existing model by reweighing its output softmax score using a temperature parameter and through random perturbations of the input data. For this, we use DenseNet as the base model as described in . On the other hand, OE models are trained to minimize a loss regularized by an outlier exposure loss term, a loss term that requires access to OoD samples. 
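For reference, before turning to the comparison results, below is a small sketch of how the ensemble WAIC score and OoD decision described earlier in this section could be computed at test time. The weights stand in for the normalized architecture probabilities p_φ(α_i), the threshold is assumed to be calibrated on held-out in-distribution data, and all names are illustrative rather than the exact implementation.

```python
import numpy as np

def ensemble_waic(log_probs, weights):
    """log_probs: (M,) log-likelihoods of one image under the M ensemble members.
    weights:   (M,) normalized architecture probabilities p_phi(alpha_i)."""
    mean = np.sum(weights * log_probs)                # weighted estimate of E[log p(x)]
    var = np.sum(weights * (log_probs - mean) ** 2)   # weighted estimate of V[log p(x)]
    return mean - var                                 # WAIC(x); lower values suggest OoD

def is_ood(log_probs, weights, threshold):
    # Flag the image as out-of-distribution when its WAIC score falls below the
    # threshold calibrated on in-distribution data.
    return ensemble_waic(log_probs, weights) < threshold
```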
As shown in Table 1 and Table 3, our method outperforms the baseline MSP and ODIN significantly while performing better or comparably with OE, which requires OoD data during training, albeit not from the testing distribution. We plot Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves in Figure 4 and Appendix G for more comprehensive comparison. In particular, our method consistently achieves high area under PR curve (AUPR%), showing that we are especially capable of screening out OoD data in settings where their occurrence is rare. Such a feature is important in situations where anomalies are sparse, yet have disastrous consequences. Notably, ODIN underperforms in screening out many OoD datasets, despite being able to reach the original reported performance when testing on LSUN using a CIFAR10 trained model. This suggests that ODIN may not be stable for use on different anomalous distributions. Unlike NAS for common learning tasks, specifying a model and an objective to optimize for uncertainty estimation and outlier detection is not straightforward. Moreover, using a single model may not be sufficient to accurately quantify uncertainty and successfully screen out OoD data. We developed a novel neural architecture distribution search (NADS) formulation to identify a random ensemble of architectures that perform well on a given task. Instead of seeking to maximize the likelihood of in-distribution data which may cause OoD samples to be mistakenly given a higher likelihood, we developed a search algorithm to optimize the WAIC score, a Bayesian adjusted estimation of the data entropy. Using this formulation, we have identified several key features that make up good uncertainty quantification architectures, namely a simple structure in the shallower layers, use of information preserving operations, and a larger, more expressive structure with skip connections for deeper layers to ensure optimization stability. Using the architecture distribution learned by NADS, we then constructed an ensemble of models to estimate the data entropy using the WAIC score. We demonstrated the superiority of our method to existing OoD detection methods and showed that our method has highly competitive performance without requiring access to OoD samples. Overall, NADS as a new uncertainty-aware architecture search strategy enables model uncertainty quantification that is critical for more robust and generalizable deep learning, a crucial step in safely applying deep learning to healthcare, autonomous driving, and disaster response. A FIXED MODEL ABLATION STUDY | We propose an architecture search method to identify a distribution of architectures and use it to construct a Bayesian ensemble for outlier detection. | 663 | scitldr |
Modern applications from Autonomous Vehicles to Video Surveillance generate massive amounts of image data. In this work we propose a novel image outlier detection approach (IOD for short) that leverages the cutting-edge image classifier to discover outliers without using any labeled outlier. We observe that although intuitively the confidence that a convolutional neural network (CNN) has that an image belongs to a particular class could serve as outlierness measure to each image, directly applying this confidence to detect outlier does not work well. This is because CNN often has high confidence on an outlier image that does not belong to any target class due to its generalization ability that ensures the high accuracy in classification. To solve this issue, we propose a Deep Neural Forest-based approach that harmonizes the contradictory requirements of accurately classifying images and correctly detecting the outlier images. Our experiments using several benchmark image datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN demonstrate the effectiveness of our IOD approach for outlier detection, capturing more than 90% of outliers generated by injecting one image dataset into another, while still preserving the classification accuracy of the multi-class classification problem. Motivation. As modern applications such as autonomous vehicles and video surveillance generate larger amount of image data, the discovery of outliers from such image data is becoming increasingly critical. Examples of such image outliers include unauthorized personnel observed in a secret military base or unexpected objects encountered by self-driving cars on the road. Capturing these outliers can prevent intelligence leaks or save human lives. State-of-the-Art. Due to the exceptional success of deep learning over classical methods in computer vision, in recent years a number of works BID17 BID27 BID5 BID23 leverage the representation learning ability of a deep autoencoder or GAN BID7 for outlier detection. Outliers are either detected by plugging in the learned representation into classical outlier detection methods or directly reported by employing the reconstruction error as the outlier score BID36 BID4. However, these approaches use a generic network that is not trained specifically for outlier detection. Although the produced representation is perhaps effective in representing the common features of the "normal" data, it is not necessarily effective in distinguishing "outliers" from "inliers". Recently, some works BID26 BID21 were proposed to solve this issue by incorporating the outlier detection objective actively into the learning process. However, these approaches are all based on the one-class technique BID28 BID18 BID33 that learns a single boundary between outliers and inliers. Although they perform relatively well when handling simplistic data sets such as MNIST, they perform poorly at supporting complex data sets with multiple "normal" classes such as CIFAR-10 (BID11). This is due to the difficulty in finding a separator that encompasses all normal classes yet none of the outliers. Proposed Approach and Contributions. In this work we propose a novel image outlier detection (IOD) strategy that successfully detects image outliers from complex real data sets with multiple normal classes. IOD unifies the core principles of cutting edge deep learning image classifiers BID7 and classical outlier detection within one framework. 
Classical outlier detection techniques BID3 BID9 BID1 consider an object as an outlier if its outlierness score is above a certain cutoff threshold ct. Intuitively given a Convolutional Neural Network (CNN) BID12 ) trained using normal training data (namely, data without labeled outliers), the confidence that the CNN has that an image belongs to a particular class could be leveraged to measure the outlierness of the image. This is based on the intuition that we expect a CNN to be less confident about an outlier compared to inlier objects, since outliers by definition are dissimilar from any normal class. By using the confidence as an outlier score, IOD could separate outliers from all normal classes. However, our experiments (Sec. 2) show that directly using the confidence produced by CNN to identify outliers in fact is not particularly effective. This is because the requirements of accurately classifying images and correctly detecting the outlier images conflict with each other. CNN achieves high accuracy in image classification because of its excellent generalization capability that enables a CNN to overcome the gap between the training and testing images. However, the generalization capability jeopardizes the detection of outliers, because it increases the chance of erroneously assigning an outlier image to some class with high confidence to which actually it does not fit. We solve this problem by proposing a deep neural decision forest-based (DNDF) approach equipped with an information theory-based regularization function that leverages the strong bias of the classification decisions made within each single decision tree and the ensemble nature of the overall decision forest. Further, we introduce a new architecture of the DNDF that ensures independence amongst the trees and in turn improves the classification accuracy. Finally, we use a joint optimization strategy to train both the spit and leaf nodes of each tree. This speeds up the convergence. We demonstrate the effectiveness of our outlierness measure, the deep neural forest-based approach, the regularization function, and the new architecture using benchmark datasets including MNIST, CIFAR-10, CIFAR-100, and SVHN -with the accuracy higher than 0.9 at detecting outliers, while preserving the accuracy of multi-class classification. In convolutional neural networks (CNN) BID13, the final Fully Connected (FC) layer contains a single node n i for each target class C i in the model. Here i ∈ {1, 2, ..., m}, where m is the number of target classes. Given an input image, the final FC layer computes a weighted sum score, s i w.r.t. class C i as s i = n j =1 F j w ji, where F j is a feature produced by the convolutional model and w ji is the learned weight of the FC layer that connects F j and node n i. The maximum weighted sum is the largest weighted sum score among all classes, defined as max (s 1, s 2, ..., s m).Using the weighted sum scores as input, a softmax activation function is then applied to generate a class probability p i between 0-1 for each node n i (m i=1 p i = 1). This can be interpreted as relative measurement of how likely it is that one image falls into target class C i. A testing image t will be assigned to class C i if p i = max (p 1, p 2, . . ., p m). p i is called the maximum probability of image t. Intuitively, if an image does not have a good match with any known class, its maximum probability will be small. 
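As a minimal illustration (not the exact implementation evaluated later), both confidence scores can be read directly off the classifier's final layer:

```python
import torch
import torch.nn.functional as F

def confidence_scores(logits):
    """logits: (batch, m) tensor of weighted-sum scores s_1, ..., s_m from the final FC layer."""
    max_weighted_sum = logits.max(dim=1).values                     # max(s_1, ..., s_m)
    max_probability = F.softmax(logits, dim=1).max(dim=1).values    # max(p_1, ..., p_m)
    return max_weighted_sum, max_probability
```

Either score can then be compared against a cutoff calibrated on the training set, as described next.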
Since outliers typically do not exhibit many features of a known class as compared to inliers, we postulate that the maximum probability of an outlier tends to be small relative to inliers. Therefore, it is natural to use the maximum probability or the corresponding maximum weighted sum score as a confidence measure for images. Confidence-based Outlier Detection. Next, we propose a method that uses this confidence to detect outliers. An outlierness threshold ct is established in the training process using the training image x_k which has the kth smallest confidence among all objects in the training set. Then, a testing image is considered an outlier if its confidence is smaller than the confidence of this selected x_k. (Figure 1: Max weighted sum vs. max route probability, tested on the CIFAR-10 model; panel (b) shows our improved score using the max route probability distribution.) One advantage of this approach is that, unlike the state-of-the-art image outlier detection work BID26 BID27, we do not have to explicitly set the outlierness cutoff threshold ct, at the cost of introducing a different hyperparameter k. However, k is a more intuitive parameter to set than ct, because k can be set by simply assuming a percentage of outliers in the training data. Our experiments show that this straightforward approach alone does not work very well, because many of the outlier images do have a large maximum probability (maximum weighted sum). We illustrate this with an example. First, we trained a CIFAR-10 model using the CIFAR-10 dataset BID11. We then used the model to classify the CIFAR-10 and CIFAR-100 testing data. Each dataset contains 60,000 images. Ideally, all CIFAR-100 images should be detected as outliers, because the CIFAR-100 and CIFAR-10 images contain disjoint classes. Fig. 1(a) shows the maximum weighted sum w.r.t. the CIFAR-10 and CIFAR-100 datasets when tested on the CIFAR-10 model. The CNN network is identical to the network we describe in our experiment section (Sec. 5.2). Note that we demonstrate the results of using the maximum weighted sum instead of the maximum probability, because our experimental evaluation shows that the maximum weighted sum performs better than the maximum probability in detecting outliers. The black vertical line represents the cutoff threshold ct. In this case, k is set to 5000. We expect nearly 100% of CIFAR-100 images to fall to the left of the black dashed line and hence be captured as outliers. However, more than 30% of CIFAR-100 images have large maximum weighted sum scores and are not correctly classified as outliers. Thus, the accuracy of this outlier detection scheme is less than 0.7. Analysis. The low accuracy of relying on the maximum probability or maximum weighted sum to detect outliers is caused by the contradictory requirements of accurately classifying images and effectively detecting outliers. It is well known that, to accurately classify images, the image classifier has to have excellent generalization capability such that the gap between the training and testing images can be overcome; in other words, overfitting must be avoided. For this, regularization techniques such as data augmentation BID30, random dropout BID31, or weight decay BID14 are commonly used to improve the generalization of CNNs. However, these generalization methods inevitably jeopardize the outlier detection capability of the model. This is because some images will be classified to one class even if they are quite different from the common features of that class extracted during training.
Yet instead such images might be outliers -generalization tends to blur the boundary between normal images and outliers. Therefore, high confidence may be assigned to image outliers as shown in above use case. To address this shortcoming of simply using the maximum class probability as the outlier score, we propose a deep neural decision forest-based approach that harmonizes the contradictory requirements of accurate image classification and effective outlier detection in one network structure. It does this by taking advantage of the strong bias of the classification decisions made within each tree and the ensemble nature of the decision forest. In this section, we first introduce the deep neural forest model BID10. We then show how to use the maximum route of each tree to distinguish outliers from inliers. Fully Connected LayersFinal Fully Connected Layer DISPLAYFORM0 The deep neural forest BID10 combines the representation learning of deep convolutional networks and the divide-and-conquer principle of decision trees. The key idea is to introduce a back-propagation compatible version of stochastic decision trees. As depicted in FIG1, the deep neural forest is composed of three key components: a deep CNN, decision nodes (split nodes), and prediction nodes (leaf nodes). The deep CNN component holds parameters from all convolutional modules and the FC layers except for the final FC layer. Decision nodes indexed by N are the internal nodes of the tree. Each decision node is connected to one output node of the final FC layer. Prediction nodes indexed by L are the leaf nodes of the tree. Each decision node n ∈ N is assigned a decision function d n (.; Θ): X → parametrized by Θ. Each decision node n classifies a sample x based on its features and routes it left or right down the tree. Note that the routing decision d n is probabilistic; formally d n (x ; Θ) is defined as: DISPLAYFORM0 where σ(x) = (1 + e −x) −1 is the sigmoid function and function f n (.; Θ) is a linear output unit provided by the FC layer of a deep network parametrized by Θ. Therefore, it is f n (.; Θ) that connects the neural network and the decision tree. The function is turned into a probabilistic routing decision in the range by applying the sigmoid activation. Each prediction node l ∈ L holds a probability distribution π l over the output space Y. Once a sample ends at a leaf node l, a prediction is given by π l. In the case of stochastic routing, the leaf predictions will be averaged by the probability of reaching the leaf. The final prediction for sample x from tree T with decision nodes parametrized by Θ is given by DISPLAYFORM1 where π = (π l) l∈L and π ly denotes the probability that a sample reaching leaf l belongs to class y. µ l (x |Θ) is considered as the routing function. It produces the probability that sample x will reach leaf l. l µ l (x |Θ) = 1 for all x ∈ X. Since π l is not influenced by the the input x, essentially the routing probability (µ l) determines the class of x. Based on the above description, given a sample image x, each decision tree in the forest produces a probability distribution µ(x |Θ) over each route, or path from root-to-leaf, where µ l (x |Θ) represents the probability that sample x reaches leaf l. More specifically, µ l (x |Θ) is expressed as follows: DISPLAYFORM0 whered n (x ; Θ) = 1 − d n (x ; Θ), and 1 l n is 1 if leaf l belongs to the left subtree of node n and 0 otherwise. Similarly, 1 l n is 1 if leaf l belongs to the right subtree of node n and 0 otherwise. As shown in Eq. 
3, the probability of a route is computed as the product of the probabilities w.r.t. all split nodes on that route. Therefore, given an image x, the probability distribution of the routes is expected to be very skewed and biased to one particular route. A given route will have an extremely small probability if x does not fit the features represented by the split nodes on this route learned through the deep CNN, because the product of multiple small probabilities will diminish quickly. Typically just one route will stand out if all its split nodes match the features of x well. We call this route the max route, because it has the maximum probability. It determines the class of x. The probability of the max route (or max route probability) can be used to measure how confident the classifier is about its classification decision of image x. The larger the max route probability is, the more confident the classifier is about the image. More specially, given an image x and a decision tree T, the confidence is measured as: DISPLAYFORM1 where max {µ l (x |Θ)|l ∈ L} denotes the max route probability of decision tree T for the given image x. Since a forest is an ensemble of decision trees, the final confidence of image x is measured as: DISPLAYFORM2 Discussion of the Effectiveness in Outlier Detection. Intuitively this max route probability can be expected to be more effective than the maximum weighted sum of the classes of the CNN at separating outliers, as an outlier image will not have a good match with every split node on the max route. So its max route probability is limited by the product operation in the computation. In contrast, the maximum weighted sum in CNN tends to fall off much more slowly because the score is computed based on the linear combination of multiple features, such that a single matching feature can make the score high. This is also confirmed in our experimental studies. To further improve the effectiveness of using the max route probability to detect outliers, we introduce a regularization to prevent the generation of routing decision whose probability distribution is close to uniform. We use a information theory-based approach. That is, we penalize the routing decision whose probability distribution has a large entropy, because a uniform distribution has a large entropy. This ensures that the max route probability of each routing decision always stands out, making it more effective at detecting outliers. More specifically, given a decision tree T, |L| routing options exist for each sample x ∈ X reaching different prediction nodes. The entropy of the route probability distribution of sample x is given by: DISPLAYFORM0 where µ i (x |Θ) denotes the probability that sample x reaching leaf l i. However, the learning process does not converge during runtime if we directly apply this penalty function on the routing probability distribution due to numerical instability. We solve this problem by applying a softmax function on the probability distribution as a normalization. The revised entropy of the routing probability distribution is then given by: DISPLAYFORM1 where DISPLAYFORM2 To penalize the routing whose probability distribution has a large entropy, we add the entropy w.r.t. each training sample to the log-loss term. 
Given a sample x ∈ X and an output distribution y ∈ Y, the log-loss term of one decision tree T is represented as: DISPLAYFORM3 where β controls the strength of the penalty and DISPLAYFORM4 Therefore, the total log-loss for the random forest composed of |T | trees is defined as: Fig. 1(b) shows the max route probability distributions w.r.t. CIFAR-10 and CIFAR-100 tested on the CIFAR-10 model. By setting k as 5000, we get a max route probability cutoff threshold ct = 0.99864. As confirmed, 95% of CIFAR-100 images have their max route probabilities smaller than this large ct. Therefore, it achieves an outlier detection accuracy around 0.95, making it much more effective than using a threshold on CNN class weights. DISPLAYFORM5 Although skewness in the classification probability distribution (for example, the output of softmax in CNN) can benefit the detection of outlier images, it is known to hurt the generalization of image classifiers due to the high risk of overfitting BID24. The situation seems worse for our deep neural decision forest-based approach, because the product operation in the routing probability computation makes the probability distribution of the routing more skewed compared to the output of softmax in CNN.We overcome this in our decision forest-based approach as a of the ensemble nature of the decision forest, allowing us to maintain good generalization performance. This is because, although one tree may overfit some aspects of the images, the ensemble provides a tool to make a composite prediction so that the overfitting in each individual tree is overcome. More specifically, as an ensemble of decision trees F = {T 1, . . ., T k}, a forest F delivers a prediction for a sample x by averaging on the output of each tree, i.e., DISPLAYFORM0 Enhancement of Generalization Capability. Next, we introduce a new architecture of the deep neural decision forest to further enhance its generalization capability. As shown in FIG1, although in BID10, different trees are connected to different sets of nodes of the final FC layer, the trees still share the other FC layers of the CNNs. This violates the principle of random decision forest where the trees in a forest should be as independent as possible so that that they can compensate each other. That is why decision forests tend to show excellent generalization capability. Therefore, we design a new architecture FIG2, Appendix A). It divides all FC layers into k independent components, each of which is connected to one individual tree. This ensures independence amongst the trees and in turn improves the classification accuracy. In summary, regularization and our new architecture together enable our deep neural decision forest-based approach to provide high accuracy at image classification while also assuring effective outlier detection, as we demonstrate in our experiments (Sec. 5).In Appendix B, we show how to train the deep neural decision tree. The training requires estimating both the decision node parametrizations θ and the leaf predictions π. In BID10 the minimum empirical risk principle under log-loss is adopted for estimation by using a two-step optimization strategy, where θ and π are updated alternatively to minimize the log-loss. In this work we use a new form of prediction nodes that makes the deep neural forest fully differentiable. This enables us to abandon the two-step optimization strategy and instead optimize the Θ and Π parameters jointly in one step through back-propagation. 
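Since the routing probabilities, leaf distributions, and entropy penalty introduced above are all differentiable, a single tree of the forest can be written compactly as follows. This is a minimal PyTorch-style sketch under our own naming conventions (not the authors' released code, and beta is an illustrative penalty strength); the forest is obtained by instantiating several such trees and averaging their outputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftTree(nn.Module):
    """One neural decision tree: sigmoid split nodes, softmax-parametrized leaves."""
    def __init__(self, in_features, depth, num_classes):
        super().__init__()
        self.depth = depth
        n_split = 2 ** depth - 1                                 # decision (split) nodes
        n_leaf = 2 ** depth                                      # prediction (leaf) nodes
        self.split = nn.Linear(in_features, n_split)             # f_n(.; Theta)
        self.leaf_logits = nn.Parameter(torch.zeros(n_leaf, num_classes))  # w_l

    def forward(self, x):
        d = torch.sigmoid(self.split(x))                         # (B, n_split): "go left" probabilities
        mu = x.new_ones(x.size(0), 1)                            # route probability, starting at the root
        offset = 0
        for _ in range(self.depth):
            width = mu.size(1)
            d_level = d[:, offset:offset + width]                # split nodes of the current level
            mu = torch.stack((mu * d_level, mu * (1.0 - d_level)), dim=2).flatten(1)
            offset += width
        pi = F.softmax(self.leaf_logits, dim=1)                  # leaf class distributions pi_l
        p_y = mu @ pi                                            # P_T[y|x] = sum_l mu_l(x) * pi_ly
        confidence = mu.max(dim=1).values                        # max route probability s(x|T)
        return p_y, mu, confidence

def tree_loss(p_y, mu, targets, beta=0.1):
    log_loss = F.nll_loss(torch.log(p_y + 1e-12), targets)      # log-loss of the tree
    q = F.softmax(mu, dim=1)                                    # softmax-normalized route distribution
    entropy = -(q * torch.log(q + 1e-12)).sum(dim=1).mean()     # penalize near-uniform routing
    return log_loss + beta * entropy
```

At test time, the forest confidence is the average of the per-tree max-route probabilities, and an image is flagged as an outlier when that average falls below the cutoff ct obtained from the kth smallest training confidence (e.g. k = 5000 above).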
This back-propagation based learning process is also described in Appendix B. Datasets. We empirically demonstrate the effectiveness of our proposed image outlier detection (IOD) strategy on several benchmark image datasets. Specifically, we train models on CIFAR-10 (BID11) and MNIST datasets. Given a trained model on one data set, we consider examples from other datasets as outliers when testing the model. For example given a CIFAR-10 model, images in CIFAR-100, SVHN BID20, and MNIST datasets are outliers, because the images in these datasets have features distinct from those common to the CIFAR-10 images captured by the classification model. We evaluate: IOD-IOD-Weighted-Sum described in Sec. 2. The weighted sum score in the final FC layer of the traditional CNN is utilized as the confidence measure; IOD-IOD-Max-Route-Shared-FC method based on the original deep neural forest architecture proposed in BID10 in which each tree shares the FC layers as described in Sec. 3.1. Here, the max route is used as the confidence measure of IOD; IODMax-Route-Different-FC: it uses our new deep neural forest architecture. Different trees are connected to isolated FC layers. Similarly the max route is used as the confidence measure; IOD-Max-Route-Penalty: it uses our new deep neural forest architecture with a regularization term applied to the loss function (Eq. 8); Deep SVDD BID26: the state-of-art fully deep once-class method described in Sec. 6; AnoGAN BID27: the state-of-art generative approach based on GAN. We compare against Deep SVDD and AnoGAN because they focus on image datasets. Experimental Setup. We ran experiments on a GPU. All IOD models are implemented based on Pytorch BID22. For Deep SVDD and AnoGAN we reuse the codes provided by their authors. The code and models will be made public available via Github. Metrics. We measure the accuracy of outlier detection and the classifation accuracy of our IOD-based approaches. Parameter Settings. We set learning rates manually. The detailed settings of the parameters are provided in each subsection. All networks are trained using mini-batches of size 128. The momentum is set to 0.9 for all models. The weighted decays are set to 0.0001. We do not use data augmentation. We use simple global contrast normalization to pre-process all images. When testing CIFAR-10, CIFAR-100, and SVHN datasets on the model trained for MNIST, we change the image to gray scale and take central crops of the images. On the other hand, when testing MNIST on models trained for other datasets, we increase its color channel from 1 to 3 by copying the original gray image 3 times. The CIFAR-10 dataset BID11 ) is composed of 10 classes of natural images with 50,000 training images and 10,000 testing images. Each image is an RGB image of size 32 × 32.We use a deep NN composed of 10 convolutional layers and 2 FC layers with 1024 and 384 hidden units correspondingly. For the IOD-Weighted-Sum method, a 10-way linear layer is used for the final prediction. For the other three max route-based approaches, the output is connected to a deep neural forest containing 20 depth-3 trees. Specifically, for the IOD-Max-Route-Shared-FC method, the output of the first FC layer using 384 hidden units is connected to the second FC layer with 300 (= 20 × (2 (3 +1) − 1 )) hidden nodes. It is then connected to 20 depth-3 trees. For the IOD-Max-Route-Different-FC and the IOD-Max-Route-Penalty methods, the output of the convolutional layer is broadcast to 20 different sets of FC layers. 
Each set contains 2 FC layers using 1024 and 384 hidden units. Each connects to the final FC layer with 15 (= 2 (3 +1) − 1 ) hidden nodes. This final FC layer is then connected to a tree. The learning rate is initialized as 0.1 and eventually decays to 0.001. For DEEP SVDD and AnoGAN we use the settings recommended by the authors. The parameter k is set as 5000. As shown in Sec. 2 k is used to establish an outlierness cut off threshold ct. For our own IOD approach, a testing image is an outlier if its maximum weighted sum score or max route probability is smaller than ct. For DEEP SVDD and AnoGAN an image is an outlier if its outlierness score is larger than the corresponding ct. In comparison to the state-of-the-art TAB0, our IOD-Max-Route-Penalty method significantly outperforms Deep SVDD and AnoGAN in all cases. Deep SVDD performs poorly, because as a one-class method it is not good at separating outliers from inliers that belong to multiple normal classes. AnoGAN only works well in detecting the MNIST images as outliers which are significantly different from CIFAR-10, while fails in all other cases when the outliers share more features with the inliers. As for our own IOD-based methods, our max route-based methods significantly outperform IOD-Weighted-Sum that uses weighted-sum as confidence measure to detect outliers, without giving up our ability to correctly classify the CIFAR-10 images. The performance gain from the deep neural forest architecture, in which the confidence w.r.t. each image is computed as the multiplication of the routing probabilities produced at the decision nodes on the route. Compared to the weighted sum approach, this multiplication of probabilities enlarges the confidence gap. Thus it is able to better detect outliers than the weighted sum approach. In addition, two of our max route-based methods (IOD-Max-Route-Shared-FC and IOD-Max-Route-Different-FC) achieve even better classification accuracy compared to the weighted sum-based method that uses the classical CNN architecture. The IOD-Max-Route-Penalty method outperforms the other two IOD-Max-Route based methods in detecting outliers in almost all cases, especially in detecting MNIST outliers. This is because by introducing regularization to penalize routing decisions that have large entropy, IOD-Max-Route-Penalty avoids uniform route probability distributions and thus leads to larger maximum route probabilities for inliers. At the same time, its classification accuracy decreases a little bit, because the regularization might introduce overfitting BID32. However, it is effectively alleviated because of the ensemble of the decision trees. Therefore, the drop on the classification accuracy is almost negligible. Moreover, IOD-Max-Route-Different-FC consistently outperforms IOD-Max-Route-Shared-FC in both outlier detection and classification. This demonstrates the effectiveness of our new proposed deep neural forest architecture as compared to the original version. This new architecture has a large capacity in detecting outliers and classifying images because of the fully isolated decision trees in the forest. Due to space limitation, please refer to Appendix C for the on MNIST. Classical Outlier Detection. Outlier detection has been extensively studied in the literature BID3 BID9 BID25 BID1. These outlier detection techniques in general share one common principle. Namely, an object is an outlier if its outlierness score is above a certain cutoff threshold. This principle is also leveraged in our context. 
However, unlike these works BID3 BID9 BID25 BID1, the outlierness score is not measured in the original feature space. This avoids the similarity search problem in the high dimensional image data. One-Class Classification. One-class Support Vector Machine (OC-SVM) and Support Vector Data Description (SVDD) methods BID28 BID18 BID33 use only normal training data to separate outliers from normal data based on kernel-based transformations. In particular, OC-SVM uses a hyperplane BID28 BID18 to separate the target objects from the origin with maximal margin. It is based on the assumption that the origin of a kernel-based transformed representation belongs to the outlier class. SVDD is a different kernel SVM method in which data is enclosed in a hypersphere of radius R in the transformed feature space. The squared radius is minimized together with penalties for margin violation. However, these approaches are shown to be extremely sensitive to specific choices of the representation, kernel, and the hyper-parameters. The difference in performance can be dramatic based on these choices BID19. Therefore, these methods are not robust. In addition, they assumed that all normal data belongs to one single class. This does not fit our scenario in which the applications generate rich classes of images. Shallow Image Outlier Solutions. In BID29, the authors proposed an unsupervised image outlier method. It first extracts a set of image features from the raw pixels and several combinations of image transforms. Then, the mean and the variance values of each feature are computed over all images in the dataset. An image is considered as an outlier if the values of its features significantly deviate from their mean values. This approach performs poorly even on the simplistic MNIST due to its limited feature extraction ability. In BID8, the authors proposed a probabilistic PCA model focusing on the extraction of the features that in accurate image reconstruction. During this process, the outliers are identified and removed to improve the performance of the PCA model. Therefore, the target of this work is on extracting the most typical features instead of outlier detection. Deep One-class methods. Research efforts have been put on enhancing one-class classification with the representation learning ability of deep neural network. In BID23; BID5, "mix" methods have been proposed that detect outliers by first learning a representation using deep learning and then feed that into a classical shallow outlier detection method. Recently, "fully deep" methods BID26 BID21 were proposed that produce representations and the boundary separating outliers from inliers (hyperplane in OC-SVM and hypersphere in SVDD) within one learning process. These methods outperform the mix methods according to BID26. Unfortunately, these methods still suffer from the fundamental problem of one-class classification, namely they only find a single boundary between outliers and inliers. However, real datasets tend to have multiple classes of normal images. For multi-class datasets, it is difficult to find a separator that encompasses all normal classes and none of the outliers. As confirmed in our experiments, our IOD method significantly outperforms the one-class approaches when handling complex image datasets such as CIFAR-10.Deep Autoencoder-based Methods. Deep autoencoder has been broadly used to detect outliers. 
It is based on the observation that these networks are able to extract the common factors of variation from normal samples and reconstruct them accurately, while anomalous samples do not contain these common factors of variation and thus cannot be reconstructed accurately. The deep autoencoders can be used to detect outliers in a mixed approach BID5, namely by plugging in the learned embeddings into classical outlier detection methods. It can also be directly used to detect outliers by employing the reconstruction error as the outlier score BID36 BID4. However, the objective of autoencoders is dimension reduction. It does not target on outlier detection directly. Therefore, the learned low dimensional representation is not necessarily effective in distinguishing outliers from inliers. Furthermore, choosing the right degree of compression is also difficult when applying autoencoders for outlier detection. GAN-based Methods. The GAN-based method BID27 ) (AnoGAN) first trains a GAN to generate samples according to the training data. A test object is an outlier if it cannot find a close point in the generator's latent space. However, similar to autoencoders, generative approaches have difficulty in regularizing the generator for compactness. Deep Statistical Methods. The deep statistical methods detect outliers by directly modeling the data distribution with the help of deep structures such as autoencoder. In particular, in BID37 the Deep Autoencoding Gaussian Mixture Model (DAGMM) method uses a deep autoencoder to generate a low-dimensional representation and reconstruction error which are then used to build a Gaussian Mixture Model (GMM). The parameters of the deep autoencoder and the mixture model are jointly optimized in an end-to-end fashion. Similarly, in BID35 the deep structured energy based models (DSEBMs) detects outliers by connecting the autoencoder and the energy based model (EBM). However, the statistical methods detect outliers purely based the density of the objects. The model might assign high density to the outliers if there are many proximate outliers, hence ing in false negative. Open Set Deep Network. a method was introduced to adapt Convolutional Neural Networks to discover a new class by adding one additional class at the softmax layer. It leverages the weighted sum score to determine whether one image belongs to the new class. However, its techniques are tightly coupled with one specific network architecture, namely AlexNet BID12. Our IOD framework instead can accommodate any type of image classifiers. Learning from Noisy Labeled Image Data. In BID34, the authors studied the problem of training CNNs that are robust to a large number of noisy labels. The idea is to extend CNNs with a probabilistic model, which infers the true labels from the noisy labels and then uses them as clean labels in the training of the network. This assumes that the noisy labels or outliers are already known. Therefore, this is not only totally different from our outlier detection problem, but also not practical in most applications where outliers are unknown and unexpected phenomenon. In this work we propose a novel approach that effectively detects outliers from image data. The key novelties include a general image outlier detection framework and effective outlierness measure that leverages the deep neural decision forest. 
Optimizations such as new architecture that connects deep neural network and decision tree and regularization to penalize the large entropy routing decisions are also proposed to further enhance the outlier detection capacity of IOD. In the future we plan to investigate how to make our approach work in multi-label classification setting. Convolutional Modules DISPLAYFORM0 DISPLAYFORM1 Fully Connected Layers − (Learning a deep neural decision tree requires estimating both the decision node parametrizations θ and the leaf predictions π. In BID10 the minimum empirical risk principle under log-loss is adopted for their estimation by using a two-step optimization strategy, where θ and π are updated alternatively to minimize the log-loss. In this section we first introduce a new form of prediction nodes that makes the deep neural forest fully differentiable. This enables us to abandon the two-step optimization strategy and instead optimize the Θ and Π parameters jointly in one step through back-propagation. Next, we show the back-propagation based learning process. The prediction node used in BID10 holds a probability distribution π l over Y and are optimized alternately with the decision nodes d. Specifically, the optimal predictions for all prediction nodes can be obtained by minimizing a convex objective given fixed decision nodes, while the parameters in the decision nodes are trained through the back-propagation process in each training epoch. Obviously, this two-step strategy is not efficient. Here, we use new forms of prediction nodes that makes the decision forest fully differentiable and therefore can be optimized jointly with the decision nodes as shown in Sec. B.2.Each prediction node is parametrized using a k-dimensional parametric probability distribution w l, denoted as: DISPLAYFORM0 where k is the number of classes. The softmax function takes a vector of real-valued scores and converts it to a vector of values between 0 and 1 that sum to one. In Sec. B.2, we will show that using the new prediction nodes, the final loss function is still convex. Learning the random forest model requires us to find a set of parameters Θ and π that can minimize the total log loss defined in Eq. 9. To minimize Eq. 9, in fact we only need to independently minimize the penalized loss (Eq. 8) of each individual tree. Next, we show that the loss function is fully differentiable. Therefore, we are able to employ a Stochastic Gradient Descent (SGD) approach to minimize the loss w.r.t. Θ and π, following the common practice of back-propagation in deep neural networks. Learning Decision Nodes by Back-Propagation. Given a decision tree, the gradient of the loss L with respect to Θ can be decomposed by the chain rule as follows: DISPLAYFORM0 Here, the derivative of the second part ∂fn (x ;Θ) ∂Θ is identical to the back-propagation process of traditional CNN modules and thus is omit here. Now let's show how to compute the first part: DISPLAYFORM1 where DISPLAYFORM2 and DISPLAYFORM3 By using the chain rule DISPLAYFORM4 where DISPLAYFORM5 and DISPLAYFORM6 where DISPLAYFORM7 Learning Prediction Nodes by Back-Propagation. Given a decision tree, the gradient of the Loss L w.r.t. the weights w of the prediction nodes (defined in Eq. 11) can be decomposed by the chain rule as follows: DISPLAYFORM8 where DISPLAYFORM9 and DISPLAYFORM10 DISPLAYFORM11 C Experiment Results on MNIST MNIST consists of 28 × 28 pixel grayscale images of handwritten digits from 0 to 9. 
There are 60,000 training images and 10,000 testing images in the dataset. In this case we use a neural network composed of 3 convolutional layers and one FC layer with 625 hidden units. Similar to the CIFAR-10 model, the weighted-sum method uses a 10-way softmax layer for the final prediction, while the max route-based methods connect the neural network to a decision forest containing 5 depth-8 trees. The initial learning rate is set to 0.0001 for the weighted-sum method and 0.001 for the max route-based methods. A multi-step decay is then applied over 50 epochs. The number of outliers in MNIST is set to 200 (k = 200) to get the confidence cutoff threshold (ct) (Sec. 2). (TAB2 excerpt, only partially recovered: BID26: 94.73%, 94.565%, 94.49%; AnoGAN BID27: 96.50%, 95.10%, 88.67%; the remaining entries could not be recovered.) As shown in TAB2, similar to the CIFAR-10 experiments, our max route-based IOD methods significantly outperform the weighted-sum based IOD method in both outlier detection and classification, for the same reason discussed in the CIFAR-10 experiments. As for our three max route-based IOD approaches, the IOD-Max-Route-Penalty approach again outperforms the other two approaches in outlier detection in all cases, while only negligible classification accuracy is sacrificed. Moreover, IOD-Max-Route-Different-FC consistently outperforms IOD-Max-Route-Shared-FC in both outlier detection and classification because of the new deep neural forest architecture. Deep SVDD and AnoGAN perform well on this simplistic MNIST dataset, because the testing datasets (CIFAR-10, CIFAR-100, and SVHN), which are treated as outliers, are significantly different from the MNIST images used as normal data. In BID6, a new theoretical framework was proposed that casts dropout used in training deep neural networks as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us a tool to model uncertainty with dropout NNs. As discussed in BID6, dropout is applied during inference. Given an image, we can get a range of softmax input values for each class by measuring 100 stochastic forward passes of the softmax input. If the range of the predicted class intersects that of other classes, then even though the softmax output is arbitrarily high (as far as 1 if the mean is far from the means of the other classes), the uncertainty of the softmax output can be as large as the entire space. In other words, the size of the intersection signifies the model's uncertainty in its softmax output value, i.e., in the prediction. The larger the intersection is, the more uncertain the model is in its prediction. Therefore, this uncertainty can be used as a score to measure the outlierness of the image. We therefore evaluate this dropout method as an additional baseline. Evaluation on CIFAR-10 Data. We use the identical network architecture suggested in the authors' github repository (https://github.com/yaringal/DropoutUncertaintyCaffeModels/tree/master/cifar10_uncertainty). When the method is applied to the CIFAR-10 training data, for each CIFAR-10 image, we forward it through the model 100 times and record the softmax input values for each class. The "outlierness" (uncertainty) of the image is defined as the intersection between the predicted class and all other classes. We use the 5,000-th largest value in the training data as the cutoff threshold. Specifically, if an image has an uncertainty larger than the threshold, it is considered to be an outlier. Then we forward each CIFAR-100 image 100 times through the model and record the outlier score for each CIFAR-100 image.
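A minimal sketch of the stochastic forward-pass scoring just described is given below. It assumes a generic classifier callable model(x, training=True) that returns the softmax-input (logit) vector with dropout kept active at inference time; the helper names, and the use of the largest per-class range overlap as the intersection measure, are our own illustration rather than the authors' code.

import numpy as np

def mc_logits(model, x, n_passes=100):
    # Collect softmax-input values from repeated stochastic forward passes.
    return np.stack([model(x, training=True) for _ in range(n_passes)])  # (n_passes, n_classes)

def uncertainty_score(logits):
    # Outlierness: overlap between the predicted class's logit range and the other classes' ranges.
    lo, hi = logits.min(axis=0), logits.max(axis=0)
    pred = int(np.argmax(logits.mean(axis=0)))
    overlaps = [max(0.0, min(hi[pred], hi[c]) - max(lo[pred], lo[c]))
                for c in range(logits.shape[1]) if c != pred]
    return max(overlaps)  # larger overlap -> more uncertain prediction

def dropout_outlier_flags(model, train_images, test_images, k=5000):
    train_scores = np.array([uncertainty_score(mc_logits(model, x)) for x in train_images])
    cutoff = np.sort(train_scores)[-k]            # k-th largest training score
    return [uncertainty_score(mc_logits(model, x)) > cutoff for x in test_images]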
Unfortunately, the accuracy of this outlier detection scheme is only 53%. That is, it is even worse than our maximum weighted-sum baseline, which has an accuracy of 70%. Evaluation on MNIST Data. We also applied the Dropout method to MNIST training images. Again we use the network architecture suggested by the authors in their repository. We forward each MNIST image 100 times through the model and compute the outlierness score. However, when we use the 200-th largest outlierness score in MNIST training as the outlierness cutoff threshold (the same parameter setting as in our MNIST experiment in Appendix C), the accuracy in detecting CIFAR-10 images (also forwarded 100 times through the model) as outliers is lower than 10%. When we increase the parameter from 200 to 5000, its accuracy in detecting outliers increases to 48.13%, which is still much lower than that of our proposed method (above 90%), although in this case the parameter setting is clearly biased in favor of the Dropout method. We evaluate how the input parameter k influences the accuracy of outlier detection. As discussed in Sec. 2, in our IOD-based method, k is used to establish a confidence cutoff threshold that corresponds to the k-th smallest maximum route probability or maximum weighted sum among the training objects. In Deep SVDD and AnoGAN, it instead corresponds to the k-th largest outlierness score among the normal objects. In our IOD-based method, given a testing image, if its maximum weighted sum or maximum route probability is smaller than the cutoff threshold, it is considered an outlier. In Deep SVDD and AnoGAN, a testing image is considered an outlier if its outlierness score produced by Deep SVDD or AnoGAN is larger than the corresponding cutoff threshold. FIG3 shows the results on the CIFAR-10 model. Our IOD-Max-Route-Penalty (Max-Penalty) method significantly outperforms the state-of-the-art Deep SVDD and AnoGAN methods in all cases. When k decreases, all methods have a lower accuracy of outlier detection. This is expected, because a smaller k produces a smaller cutoff threshold in our IOD-based method and a larger cutoff in Deep SVDD and AnoGAN, both leading to the detection of a smaller number of outliers. Similarly, as shown in FIG4, our IOD-Max-Route-Penalty (Max-Penalty) consistently outperforms all other methods in almost all cases when detecting outliers with the simpler MNIST model. The only exception is that AnoGAN is slightly better than Max-Penalty when detecting CIFAR-10 outliers with the MNIST model. Isolation Forest (Liu et al.) is a widely used unsupervised outlier detection method. It builds an ensemble of isolation trees for a given data set. The average path length over the isolation trees is then used as the outlierness measurement. The shorter the average path length is, the more likely the instance is to be an outlier. Evaluation on CIFAR-10 Data. First, we used a CNN to extract features from the raw image data and then built an Isolation Forest on these extracted features to detect outliers. As one input parameter of Isolation Forest, the number of outliers in CIFAR-10 is set to 5,000, identical to the parameter k used in our IOD-based methods. More specifically, similar to our IOD-based approach, we use the 5,000-th smallest average path length in CIFAR-10 as the cutoff threshold. If the average path length of a testing image is smaller than the cutoff threshold, it is considered an outlier. When the model is trained on CIFAR-10, we expect all images in CIFAR-100, MNIST and SVHN to be detected as outliers.
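The Isolation Forest baseline described above can be sketched as follows. Here extract_features stands for any pretrained CNN feature extractor and is our own placeholder, and scikit-learn's score_samples is used as a monotone proxy for the average path length; this is an illustrative sketch, not the authors' implementation.

import numpy as np
from sklearn.ensemble import IsolationForest

def isolation_forest_flags(extract_features, train_images, test_images, k=5000):
    train_feats = np.stack([extract_features(x) for x in train_images])
    test_feats = np.stack([extract_features(x) for x in test_images])
    forest = IsolationForest(n_estimators=100, max_samples=256, random_state=0)
    forest.fit(train_feats)
    # Higher score_samples roughly corresponds to a longer average path length (more normal).
    train_scores = forest.score_samples(train_feats)
    cutoff = np.sort(train_scores)[k - 1]          # k-th smallest training score
    return forest.score_samples(test_feats) < cutoff   # True -> flagged as outlier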
However, in fact the outlier detection accuracy is poor: less than 2% in all cases, although we have carefully tuned the size of the network, varying the extracted feature vector from 2042 dimensions down to 512 dimensions, and tuned other parameters of the Isolation Forest such as the number of trees and max_sample. We also tried applying dimensionality reduction techniques to reduce the dimension of the extracted feature vectors and then applying Isolation Forests in the lower-dimensional space. The results were slightly better, although the detection rate for outliers is still lower than 2%. Finally, we directly applied Isolation Forests to the raw images. The outlier detection accuracy with respect to CIFAR-100, SVHN, and MNIST was 9.73%, 13.14% and 8.3%, respectively. | A novel approach that detects outliers from image data, while preserving the classification accuracy of image classification | 664 | scitldr
This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and extracts local spatial features from sets of neighboring points that surround different elements of the input. This operator maintains the permutation invariance of sequence-to-sequence learning frameworks, while representing neighboring correlations at each time step -- an important aspect in spatiotemporal predictive learning. The D-Conv operator resolves the grid-structural data requirements of existing spatiotemporal forecasting models and can be easily plugged into traditional LSTM architectures with sequence-to-sequence learning and attention mechanisms. We apply our proposed architecture to two representative, practical use cases that involve point-cloud streams, i.e. mobile service traffic forecasting and air quality indicator forecasting. Our , obtained with real-world datasets collected in diverse scenarios for each use case, show that CloudLSTM delivers accurate long-term predictions, outperforming a variety of neural network models. Point-cloud stream forecasting aims at predicting the future values and/or locations of data streams generated by a geospatial point-cloud S, given sequences of historical observations . Example data sources include mobile network antennas that serve the traffic generated by ubiquitous mobile services at city scale (b), sensors that monitor the air quality of a target region , or moving crowds that produce individual trajectories. Unlike traditional spatiotemporal forecasting on grid-structural data, like precipitation nowcasting or video frame prediction , point-cloud stream forecasting needs to operate on geometrically scattered sets of points, which are irregular and unordered, and encapsulate complex spatial correlations. While vanilla Long Short-term Memories (LSTMs) have modest abilities to exploit spatial features , convolution-based recurrent neural network (RNN) models, such as ConvLSTM and PredRNN++ , are limited to modeling grid-structural data, and are therefore inappropriate for handling scattered point-clouds.: Different approaches to geospatial data stream forecasting: predicting over input data streams that are inherently grid-structured, e.g., video frames using ConvLSTMs (top); mapping of pointcloud input to a grid, e.g., mobile network traffic collected at different antennas in a city, to enable forecasting using existing neural network structures (middle); forecasting directly over point-cloud data streams using historical information (as above, but without pre-processing), as proposed in this paper (bottom). permutations for the features . Through this, the proposed PointCNN leverages spatial-local correlations of point clouds, irrespective of the order of the input. Notably, although these architectures can learn spatial features of point-clouds, they are designed to work with static data, thus have limited ability to discover temporal dependencies. Next, we describe in detail the concept and properties of forecasting over point cloud-streams. We then introduce the DConv operator, which is at the core of our proposed CloudLSTM architecture. 
Finally, we present CloudLSTM and its variants, and explain how to combine CloudLSTM with Seq2seq learning and attention mechanisms, to achieve precise forecasting over point-cloud streams. We formally define a point-cloud containing a set of N points, as S = {p 1, p 2, · · ·, p N}. Each point p n ∈ S contains two sets of features, i.e., p n = {ν n, ς n}, where ν n = {v 1 n, · · ·, v H n} are value features (e.g., mobile traffic measurements, air quality indexes, etc.) of p n, and ς n = {c 1 n, · · ·, c L n} are its L-dimensional coordinates. At each time step t, we may obtain U different channels of S by conducting different measurements denoted by S Note that, in some cases, each point's coordinates may be unchanged, since the data sources are deployed at fixed locations. An ideal point-cloud stream forecasting model should embrace five key properties, similar to other point-cloud applications and spatiotemporal forecasting problems (a;): (i) Order invariance: A point cloud is usually arranged without a specific order. Permutations of the input points should not affect the output of the forecasting (a). (ii) Information intactness: The output of the model should have exactly the same number of points as the input, without losing any information, i.e., N out = N in. (iii) Interaction among points: Points in S are not isolated, thus the model should be able to capture local dependencies among neighboring points and allow interactions (a). (iv) Robustness to transformations: The model should be robust to correlation-preserving transformation operations on point-clouds, e.g., scaling and shifting (a). (v) Location variance: The spatial correlations among points may change over time. Such dynamic correlations should be revised and learnable during training . In what follows, we introduce the Dynamic Point Cloud Convolution (DConv) operator as the core module of the CloudLSTM, and explain how DConv satisfies the aforementioned properties. The Dynamic Point Cloud Convolution operator (DConv) generalizes the ordinary convolution on grids. Instead of computing the weighted summation over a small receptive field for each anchor point, DConv does so on point-clouds, while inheriting desirable properties of the ordinary convolution operation. The vanilla convolution takes U in channels of 2D tensors as input, and outputs U out channels of 2D tensors of smaller size (if without padding). Similarly, the DConv takes U in channels of a point-cloud S, and outputs U out channels of a point-cloud, but with the same number of elements as the input, to ensure the information intactness property (ii) discussed previously. For simplicity, we denote the i th channel of the input set as S and S j out are 3D tensors, of shape (N, (H +L), U in ) and (N, (H +L), U out ) respectively. We also define Q K n as a subset of points in S i in, which includes the K nearest points with respect to p n in the Euclidean space, i.e., Q n is the k-th nearest point to p n in the set S i in. Note that p n itself is included in Q K n as an anchor point, i.e., p n ≡ p 1 n. Recall that each p n ∈ S contains H value features and L coordinate features, i.e., p n = {ν n, ς n}, where Here, each w k is a set of weights w with index k (i.e., k-th nearest neighbor) in Eq. 2, shared across different p. in S i in, the DConv sums the element-wise product over all features and points in Q K n, to obtain the values and coordinates of a point p n in S j out. 
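The neighbour gathering and weighted aggregation just introduced can be sketched, for a single input and output channel, as follows. The shapes, variable names, and the omission of biases and of the per-channel coordinate handling are our own simplifications rather than the authors' implementation.

import numpy as np

def knn_sets(coords, K):
    # coords: (N, L) point coordinates; returns (N, K) indices of each point's
    # K nearest neighbours (the anchor point itself is the first entry).
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return np.argsort(dists, axis=1)[:, :K]

def dconv_single_channel(values, coords, W, K):
    # values: (N, H) value features, coords: (N, L) coordinates in [0, 1],
    # W: (K, H+L, H+L) weights shared across all anchor points.
    feats = np.concatenate([values, coords], axis=1)        # (N, H+L)
    neigh = feats[knn_sets(coords, K)]                       # (N, K, H+L)
    out = np.einsum('nkm,kmo->no', neigh, W)                 # weighted sum over neighbours and features
    H = values.shape[1]
    new_values = out[:, :H]
    new_coords = 1.0 / (1.0 + np.exp(-out[:, H:]))           # sigmoid keeps coordinates in (0, 1)
    return new_values, new_coords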
Note that we assume the value features are related to their positions at the previous layer/state, to better exploit the dynamic spatial correlations. Therefore, we aggregate coordinate features c(p In the above, we define learnable weights W as 5D tensors with shape . The weights are shared across different anchor points in the input map. Each element w m,m,k i,j ∈ W is a scalar weight for the i-th input channel, j-th output channel, k-th nearest neighbor of each point corresponding to the m-th value and coordinate features for each input point, and m -th value and coordinate features for output points. Similar to the convolution operator, we define b j as a bias for the j-th output map. In the above, h and h are the h -th value features of the input/output point set. Likewise, l and l are the l -th coordinate features of the input/output. σ(·) is the sigmoid function, which limits the range of predicted coordinates to, to avoid outliers. Before feeding them to the model, the coordinates of raw point-clouds are normalized to by ς = (ς − ς min)/(ς max − ς min), on each dimension. This improves the transformation robustness of the operator. The K nearest points can vary for each channel at each location, because the channels in the pointcloud dataset may represent different types of measurements. For example, channels in the mobile traffic dataset are related to the traffic consumption of different mobile apps, while those in the air quality dataset are different air quality indicators (SO 2, CO, etc.). The spatial correlations will vary between different measurements (channels), due to human mobility. For instance, more people may use Facebook at a social event, but YouTube traffic may be less significant in this case. This will be reflected by the data consumption of each app. The same applies to air quality indicators affected by vehicle movement and factory working times. We want these spatial correlations to be learnable, so we do not fix the K nearest neighbors across channels, but encourage each channel to find the best neighbor set. This is also a contribution of the CloudLSTM, which helps improve the forecasting performance. We provide a graphical illustration of DConv in Fig. 2. For each point p n, the DConv operator weights its K nearest neighbors across all features, to produce the values and coordinates in the next layer. Since the permutation of the input neither affects the neighboring information nor the ranking of their distances for any Q K n, DConv is a symmetric function whose output does not depend on the input order. This means that the property (i) discussed in Sec. 3.1 is satisfied. Further, DConv is performed on every point in set S i in and produces exactly the same number of features and points for its output; property (ii) is therefore naturally fulfilled. In addition, operating over a neighboring point set, irrespective of its layout, allows to capture local dependencies and improve the robustness to global transformations (e.g., shifting and scaling). The normalization over the coordinate features further improves the robustness to those transformations, as shown by the proof in Appendix C. This enables to meet the desired properties (iii) and (iv). More importantly, DConv learns the layout and topology of the cloud-point for the next layer, which changes the neighboring set Q K n for each point at output S j out. This enables the "location-variance" (property (v)), allowing the model to perform dynamic positioning tailored to each channel and time step. 
This is essential in spatiotemporal forecasting neural models, as spatial correlations change over time . DConv can be efficiently implemented using simple 2D convolution, by reshaping the input map and weight tensor, which can be parallelized easily in existing deep learning frameworks. We detail this in Appendix A and provide a complexity analysis of the DConv operator in Appendix B. Relations with PointCNN and Deformable Convolution . The DConv operator builds upon the PointCNN and deformable convolution neural network (DefCNN) on grids , but introduces several variations tailored to pointcloud structural data. PointCNN employs the X -transformation over point clouds, to learn the weight and permutation on a local point set using multilayer perceptrons (MLPs), which introduces extra complexity. This operator guarantees the order invariance property, but leads to information loss, since it performs aggregation over points. In our DConv operator, the permutation is maintained by aligning the weight of the ranking of distances between point p n and Q K n. Since the distance ranking is unrelated to the order of the inputs, the order invariance is ensured in a parameter-free manner without extra complexity and loss of information. Further, the DConv operator can be viewed as the DefCNN over point-clouds, with the differences that (i) DefCNN deforms weighted filters, while DConv deforms the input maps; and (ii) DefCNN employs bilinear interpolation over input maps with a set of continuous offsets, while DConv instead selects K neighboring points for its operations. Both DefCNN and DConv have transformation modeling flexibility, allowing adaptive receptive fields on convolution. The DConv operator can be plugged straightforwardly into LSTMs, to learn both spatial and temporal correlations over point-clouds. We formulate the Convolutional Point-cloud LSTM (CloudLSTM) as: Similar to ConvLSTM , i t, f t, and o t, are input, forget, and output gates respectively. C t denotes the memory cell and H t is the hidden states. Note that i t, f t, o t, C t, and H t are all point cloud representations. W and b represent learnable weight and bias tensors. In Eq. 3,'' denotes the element-wise product,'' is the DConv operator formalized in Eq. 2, and' *' a simplified DConv that removes the sigmoid function in Eq. 2. The latter only operates over the gates computation, as the sigmoid functions are already involved in outer calculations (first, second, and fourth expressions in Eq. 3). We show the structure of a basic CloudLSTM cell in the left part of Fig. 3. We combine our CloudLSTM with Seq2seq learning and the soft attention mechanism , to perform forecasting, given that these neural models have been proven to be effective in spatiotemporal modelling on grid-structural data (e.g., . We show the overall Seq2seq CloudLSTM in the right part of Fig. 3. The architecture incorporates an encoder and a decoder, which are different stacks of CloudLSTMs. The encoder encodes the historical information into a tensor, while the decoder decodes the tensor into predictions. The states of the encoder and decoder are connected using the soft attention mechanism via a context vector . Before feeding the point-cloud to the model and generating the final forecasting, the data is processed by Point Cloud Convolutional (CloudCNN) layers, which perform the DConv operations. 
Their function is similar to the word embedding layer in natural language processing tasks , which helps translate the raw point-cloud into tensors and vice versa. In this study, we employ a two-stack encoder-decoder architecture, and configure 36 channels for each CloudLSTM cell, as we found that further increasing the number of stacks and channels does not improve the performance significantly. Beyond CloudLSTM, we also explore plugging the DConv into vanilla RNN and Convolutional GRU, which leads to a new Convolutional Point-cloud RNN (CloudRNN) and Convolutional Point-cloud GRU (CloudGRU), as formulated in the Appendix E. The CloudRNN and CloudGRU share a similar Seq2seq architecture with CloudLSTM, except that they do not employ the attention mechanism. We compare their performance in the following section. To evaluate the performance of our architectures, we employ measurement datasets of traffic generated by 38 mobile services and recorded at individual network antennas, and of 6 air quality indicators collected at monitoring stations. We use the proposed CloudLSTM to forecast future mobile service demands and air quality indicators in the target regions. We provide a comprehensive comparison with 12 baseline deep learning models, over four performance metrics. All models considered in this study are implemented using the open-source Python libraries TensorFlow and TensorLayer . We train all architectures with a computing cluster with two NVIDIA Tesla K40M GPUs. We optimize all models by minimizing the mean square error (MSE) between predictions and ground truth, using the Adam optimizer . Next, we first introduce the datasets employed in this study, then discuss the baseline models used for comparison. Finally, we report on the experimental obtained. We conduct experiments on two typical spatiotemporal point-cloud stream forecasting tasks over 2D geospatial environments, with measurements collected in two different scenarios for each use case. Note that, as the data sources have fixed locations in these applications, the coordinate features will be omitted in the final output. However, in different use cases, such as crowd mobility forecasting, the coordinate features would be necessarily included. Mobile Traffic Forecasting. We experiment with large-scale multi-service datasets collected by a major operator in two large European metropolitan areas with diverse topology and size during 85 consecutive days. The data consists of the volume of traffic generated by devices associated to each of the 792 and 260 antennas in the two target cities, respectively. The antennas are non-uniformly distributed over the urban regions, thus they can be viewed as 2D point clouds over space. At each antenna, the traffic volume is expressed in Megabytes and aggregated over 5-minute intervals, which leads to 24,482 traffic snapshots. These snapshots are gathered independently for each of 38 different mobile services, selected among the most popular apps for video streaming, gaming, messaging, cloud services, and social networking. Further details about the dataset can be found in Appendix G. Air Quality Forecasting. Air quality forecasting performance is investigated using a public dataset , which comprises six air quality indicators (i.e., PM2.5, PM10, NO 2, CO, O 3 and SO 2) collected by 437 air quality monitoring stations in China, over a span of one year. The monitoring stations are partitioned into two city clusters, based on their geographic locations, and measure data on an hourly basis. 
The dataset includes 8,760 snapshots in total for each cluster. We conduct experiments on both clusters individually and fill missing data using linear interpolation. The reader is referred to Appendix G for details. Before feeding to the models, the measurements associated to each mobile service and air quality indicator are transformed into different input channels of the point-cloud S. All coordinate features ς are normalized to the range. In addition, for the baseline models that require grid-structural input (i.e., CNN, 3D-CNN, ConvLSTM and PredRNN++), the point-clouds are transformed into grids (a) using the Hungarian algorithm . The ratio of training plus validation, and test sets is 8:2. We compare the performance of our proposed CloudLSTM with a set of baseline models, as follows. PointCNN performs convolution over point-clouds and has been employed for point-cloud classification and segmentation. CloudCNN is an original benchmark we introduce, which stacks the proposed DConv operator over multiple layers for feature extraction from pointclouds. PointLSTM is another original benchmark, obtained by replacing the cells in ConvLSTM with the X -Conv operator employed by PointCNN, which provides a fair term of comparison for other Seq2seq architectures. Beyond these models, we also compare the CloudLSTM with two of its variations, i.e., CloudRNN and CloudGRU, which were introduced in Sec. 3.3. Other baseline models, including MLP , CNN , 3D-CNN , LSTM , ConvLSTM PredRNN++ , along with the detailed configuration of all models are discussed in Appendix E. We quantify the accuracy of the proposed CloudLSTM in terms of Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). Since the mobile traffic snapshots can be viewed as "urban images" , we also select Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to quantify the fidelity of the forecasts and their similarity with the ground truth, as suggested by relevant recent work. Details about the metrics are discussed in Appendix F. For the mobile traffic prediction task, we employ all neural networks to forecast city-scale mobile traffic consumption over a time horizon of J = 6 sampling steps, i.e., 30 minutes, given M = 6 consecutive past measurements. For RNN-based models, i.e., LSTM, ConvLSTM, PredRNN++, CloudLSTM, CloudRNN, and CloudGRU, we then extend the number of prediction steps to J = 36, i.e., 3 hours, to evaluate their long-term performance. In the air quality forecasting use case, all models receive a half day of measurements, i.e., M = 12, as input, and forecast indicators in the following 12 hours, i.e., J = 12. As for the previous use case, the number of prediction steps are then extended to J = 72, or 3 days, for all RNN-based models. We perform 6-step forecasting for 4,888 instances across the test set, and report in Table 1 the mean and standard deviation (std) of each metric. We also investigate the effect of a different number of neighboring points (i.e., K = 3, 6, 9), as well as the influence of the attention mechanism. Observe that RNN-based architectures in general obtain superior performance, compared to CNNbased models and the MLP. In particular, our proposed CloudLSTM, and its CloudRNN, and CloudGRU variants outperform all other banchmark architectures, achieving lower MAE/RMSE and higher PSNR/SSIM on both urban scenarios. This suggests that the DConv operator learns features over geospatial point-clouds more effectively than vanilla convolution and PointCNN. 
Among our approaches, CloudLSTM performs better than CloudGRU, which in turn outperforms CloudRNN. Interestingly, the forecasting performance of the CloudLSTM seems fairly insensitive to the number of neighbors (K); it is therefore worth using a small K in practice, to reduce model complexity. Further, we observe that the attention mechanism improves the forecasting performance, as it helps capture dependencies between input sequences and decoder vectors better. This effect has also been confirmed in other NLP tasks. We provide a complete service-wise evaluation in Appendix H. Long-term Forecasting Performance. We extend the prediction horizon to up to J = 36 time steps (i.e., 3 hours) for all RNN-based architectures, and show their MAE evolution with respect to this horizon in Fig. 4. Note that the input length remains unchanged, i.e., 6 time steps. In city 1, observe that the MAE does not grow significantly with the prediction step for most models, as the curves flatten. This means that these models are reliable in terms of long-term forecasting. As for city 2, we note that a low K may lead to poorer long-term performance for CloudLSTM, though the effect is not significant before step 20. This provides a guideline for choosing K for the different forecast lengths required. We employ all models to deliver 12-step air quality forecasting on six indicators, given 12 snapshots as input. Results are in Table 2. Also in this use case, the proposed CloudLSTMs attain the best performance across all 4 metrics, outperforming state-of-the-art methods (ConvLSTM) by up to 12.2% and 8.8% in terms of MAE and RMSE, respectively. Unlike in the mobile traffic forecasting case, a lower K yields better prediction performance, though the difference appears subtle. Again, the CloudCNN always proves superior to the PointCNN, indicating that CloudCNNs are better feature extractors over point-clouds. Overall, these results demonstrate the effectiveness of the CloudLSTM models for modeling spatiotemporal point-cloud stream data, regardless of the tasks to which they are applied. Performance evaluations of long-term forecasting, i.e., up to 72 future time steps, are conducted on RNN-based models and are presented in Appendix I. Note that we conduct our experiments using a strict variable-controlling methodology, i.e., only changing one factor while keeping the remaining ones the same. Therefore, it is easy to study the effect of each factor. For example, taking a look at the performance of LSTM, ConvLSTM, PredRNN++, PointLSTM and CloudLSTM, which all use LSTM as the RNN structure but respectively employ dense layers, CNN (for ConvLSTM and PredRNN++), PointCNN and D-Conv as core operators, it is clear that the D-Conv contributes significantly to the performance improvements. Further, by comparing CloudRNN, CloudGRU and CloudLSTM, it appears that CloudRNN < CloudGRU < CloudLSTM. Similarly, by comparing the CloudLSTM and the Attention CloudLSTM, we see that the effects of the attention mechanism are not very significant. Therefore, we believe these components rank as core operator > RNN structure > attention in terms of their contribution. We introduce CloudLSTM, a dedicated neural model for spatiotemporal forecasting tailored to point-cloud data streams. The CloudLSTM builds upon the DConv operator, which performs convolution over point-clouds to learn spatial features while maintaining permutation invariance. The DConv simultaneously predicts the values and coordinates of each point, thereby adapting to changing spatial correlations of the data at each time step. DConv is flexible, as it can be easily combined with various RNN models (i.e., RNN, GRU, and LSTM), Seq2seq learning, and attention mechanisms. The DConv can be efficiently implemented using a standard 2D convolution operator, by data shape transformation. We assume a batch size of 1 for simplicity. Recall that the input and output of DConv, S_in and S_out, are 3D tensors with shape (N, (H + L), U_in) and (N, (H + L), U_out), respectively. Note that for each p_n in S_in^i, we find the set of top K nearest neighbors Q_n^K. Combining these, we transform the input into a 4D tensor with shape (N, K, (H + L), U_in). To perform DConv over this tensor, we split the operator into the following steps:
DConv is flexible, as it can be easily combined with various RNN models (i.e., RNN, GRU, and LSTM), Seq2seq learning, and attention mechanisms. The DConv can be efficiently implemented using a standard 2D convolution operator, by data shape transformation. We assume a batch size of 1 for simplicity. Recall that the input and output of DConv, S in and S out, are 3D tensors with shape (N, (H + L), U in ) and (N, (H + L), U out ), respectively. Note that for each p n in S i in, we find the set of top K nearest neighbors Q K n. Combining these, we transform the input into a 4D tensor S i in, with shape (N, K, (H + L), U in ). To perform DConv over S i in, we split the operator into the following steps: Algorithm 1 Efficient algorithm for DConv implementation using the 2D convolution operator 1: Inputs: The weight tensor W. in, W) with step 1 without padding. S out becomes a 3D tensor with shape (N, 1, U out × (H + L)) 6: Reshape the output map S out to (N, (H + L), U out ) 7: Apply the sigmoid function σ(·) to the coordinates feature in S out This enables to translate the DConv into a standard convolution operation, which is highly optimized by existing deep learning frameworks. We study the complexity of DConv by separating the operation into two steps: (i) finding the neighboring set Q K n for each point p n ∈ S, and (ii) performing the weighting computation in Eq. 2. We discuss the complexity of each step separately. For simplicity and without loss of generality, we assume the number of input and output channels are both 1. For step (i), the complexity of finding K nearest neighbors for one point is close to O(K · L log N), 1 if using KD trees . For step (ii), it is easy to see from Eq. 2 that the complexity of computing one feature of the output p n is O((H + L) · K). Since each point has (H + L) features and the output point set S j out has N points, the overall complexity of step 2 ). This is equivalent to the complexity of a vanilla convolution operator, where both the input and output have (H + L) channels, and the input map and kernel have N and K elements, respectively. This implies that, compared to the convolution operator whose inputs, outputs, and filters have the same size, DConv introduces extra complexity by searching the K nearest neighbors for each point O(K · L log N). Such complexity does not increase much even with higher dimensional point clouds. We show that the normalization of the coordinates features enables transformation invariance with shifting and scaling. The shifting and scaling of a point can be represented as: where A and B are a positive scaling coefficient and respectively an offset. By normalizing the coordinates, we have: This implies that, by using normalization, the model is invariant to shifting and scaling transformations. We combine our proposed CloudLSTM with the attention mechanism introduced in . We denote the j-th and i-th states of the encoder and decoder as H j en and H i de. The context tensor for state i at the encoder can be represented as: where e i,j is a score function, which can be selected among many alternatives. In this paper, we choose e i,j = v We compared our proposal against a set of baselines models. MLP , CNN , and 3D-CNN are frequently used as benchmarks in mobile traffic forecasting . DefCNN learns the shape of the convolutional filters and has similarities with the DConv operator proposed in this study . LSTM is an advanced RNN frequently employed for time series forecasting . 
While ConvLSTM can be viewed as a baseline model for spatiotemporal predictive learning, the PredRNN++ is the state-of-the-art architecture for spatiotemporal forecasting on grid-structural data and achieves the best performance in many applications . The CloudRNN and CloudGRU have can be formulated as: CloudGRU: The CloudRNN and CloudGRU share a similar Seq2seq architecture with CloudLSTM, except that they do not employ the attention mechanism. We show in Table 3 the detailed configuration along with the number of parameters for each model considered in this study. Note that we used 2 layers (same as our CloudLSTMs) for ConvLSTM, PredRNN++ and PointLSTM, since we found that increasing the number of layers did not improve their performance. 3 × 3 filters are commonly used in image applications, where they have been proven effective. This yields a receptive field of 9 (3 × 3), which is equivalent to K = 9 in our CloudLSTMs. Thus this supports a fair comparison. In addition, the PredRNN++ follows a slightly different structure than that of other Seq2seq models, as specified in the original paper. 2-stack Seq2seq CloudLSTM, with 36 channels and K = 3 CloudLSTM (K = 6) 2-stack Seq2seq CloudLSTM, with 36 channels and K = 6 CloudLSTM (K = 9) 2-stack Seq2seq CloudLSTM, with 36 channels and K = 9 Attention CloudLSTM 2-stack Seq2seq CloudLSTM, with 36 channels, K = 9 and soft attention mechanism We optimize all architectures using the MSE loss function: Here v h n is the mobile traffic volume forecast for the h-th service, and respectively the forecast value of the h-th air quality indicator, at antenna/monitoring station n, at time t, while v h n is the corresponding ground truth. We employ MAE, RMSE, PSNR and SSIM to evaluate the performance of our models. These are defined as: PSNR(t) = 20 log v max (t) − 10 log 1 where µ v (t) and v max (t) are the average and maximum traffic recorded for all services/quality indicators, at all antennas/monitoring stations and time instants of the test set. VAR(·) and COV(·) denote the variance and covariance, respectively. Coefficients c 1 and c 2 are employed to stabilize the fraction in the presence of weak denominators. Following standard practice, we set 2, where L = 2 is the dynamic range of float type data, and k 1 = 0.1, k 2 = 0.3. City 2 Figure 5: The anonymized locations of the antenna set in both cities. The measurement data is collected via traditional flow-level deep packet inspection at the packet gateway (P-GW). Proprietary traffic classifiers are used to associate flows to specific services. Due to data protection and confidentiality constraints, we do not disclose the name of the operator, the target metropolitan regions, or the detailed operation of the classifiers. For similar reasons, we cannot name the exact mobile services studied. We show the anonymized locations of the antennas sets in both cities in Fig. 5 As a final remark on data collection, we stress that all measurements were carried out under the supervision of the competent national privacy agency and in compliance with applicable regulations. In addition, the dataset we employ for our study only provides mobile service traffic information accumulated at the antenna level, and does not contain personal information about individual subscribers. This implies that the dataset is fully anonymized and its use for our purposes does not raise privacy concerns. Due to a confidentiality agreement with the mobile traffic data owner, the raw data cannot be made public. 
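A sketch of the first three metrics, computed on a single snapshot, is given below. The constants follow the standard definitions rather than necessarily the exact normalisation used by the authors, and SSIM is omitted since library implementations (e.g., skimage.metrics.structural_similarity) can be used instead.

import numpy as np

def mae(pred, truth):
    return np.mean(np.abs(pred - truth))

def rmse(pred, truth):
    return np.sqrt(np.mean((pred - truth) ** 2))

def psnr(pred, truth):
    mse = np.mean((pred - truth) ** 2)
    return 20 * np.log10(truth.max()) - 10 * np.log10(mse)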
As already mentioned, the set of services S considered in our analysis comprises 38 different services. An overview of the fraction of the total traffic consumed by each service and each category in both cities throughout the duration of the measurement campaign is in Fig. 6. The left plot confirms the power law previously observed in the demands generated by individual mobile services. Also, streaming is the dominant type of traffic, with five services ranking among the top ten. This is confirmed in the right plot, where streaming accounts for almost half of the total traffic consumption. Web, cloud, social media, and chat services also consume large fractions of the total mobile traffic, between 8% and 17%, whereas gaming only accounts for 0.5% of the demand. The air quality dataset comprises air quality information from 43 cities in China, collected by the Urban Computing Team at Microsoft Research. In total, there are 2,891,393 air quality records from 437 air quality monitoring stations, gathered over a period of one year. The stations are partitioned into two clusters, based on their geographic locations, as shown in Fig. 7. Cluster A has 274 stations, while Cluster B includes 163. Note that missing data exists in the records and gaps have been filled through linear interpolation. The dataset is available at https://www.microsoft.com/ en-us/research/project/urban-air/. We dive deeper into the performance of the proposed Attention CloudLSTMs, by evaluating the forecasting accuracy for each individual mobile service, averaged over 36 steps. To this end, we present the MAE evaluation on a service basis (left) and category basis (right) in Fig. 8. Observe that the attention CloudLSTMs obtain similar performance over both cities at the service and category level. Jointly analyzing with Fig. 6, we see that services with higher traffic volume on average (e.g., streaming and cloud) also yield higher prediction errors. This is because their traffic evolution exhibits more frequent fluctuations, which introduces higher uncertainty, making the traffic series more difficult to predict. Figure 9: MAE evolution wrt. prediction horizon achieved by RNN-based models on both city clusters for the air quality forecasting. We show the MAE for long-term forecasting (72 steps) of air quality on both city clusters in Fig. 9. Generally, the error grows with time for all models, as expected. Turning attention to the CloudLSTM with different K, though the performance of different settings appears similar at the beginning, larger K can significantly improve the robustness of the CloudLSTM, as the MAE grows much slower with time when K = 9. This is consistent with the made in the mobile traffic forecasting task. We complete the evaluation of the mobile traffic forecasting task by visualizing the hidden features of the CloudLSTM, which provide insights into the knowledge learned by the model. In Fig. 10, we show an example of the scatter distributions of the hidden state in H t of CloudLSTM and Attention CloudLSTM at both stacks, along with the first input snapshots. The first 6 columns show the H t for encoders, while the rest are for decoders. The input data snapshots are samples selected from City 2 (260 antennas/points). Recall that each H t has 1 value features and 2 coordinate features for each point, therefore each scatter subplot in Fig. 
10 shows, for each state, the points positioned by their two coordinate features. (Figure 11: NO2 forecasting examples in City Cluster A generated by all RNN-based models.) The learned point distributions appear more structured at stack 2, as features are extracted at a higher level, exhibiting more direct spatial correlations with respect to the output. Lastly, in Fig. 11 and 12 we show a set of NO2 forecasting examples in both city clusters A and B considered for air quality prediction, generated by all RNN-based models, offering a performance comparison from a purely visual perspective. Point-clouds are converted into heat maps using 2D linear interpolation. The better prediction offered by (Attention) CloudLSTMs is apparent, as our proposed architectures capture trends in the point-cloud streams and deliver high long-term visual fidelity, whereas the performance of other architectures degrades rapidly over time. (Figure 12: NO2 forecasting examples in Cluster B generated by the RNN-based models.) The DConv uses sigmoid functions to regularize the coordinate features of each point, such that points which are far from the others will move closer to each other and be more involved in the computation. Further, by stacking multiple DConv operators via a dedicated structure (LSTM), the CloudLSTM has much stronger representational capacity and therefore allows the position of each input point to be refined at each time step and each stack. Eventually, each point can learn to move to the position where it is most useful, and therefore our model continues to work well when forecasting with outlier points. For demonstration, we use density-based spatial clustering of applications with noise (DBSCAN) to find such outliers (red points) in both clusters of the air quality dataset, as shown in Fig. 13. For each city cluster, the DBSCAN algorithm finds 16 outlier points, which are relatively isolated and far from the point-cloud center. We recompute the MAE and RMSE performance specifically for these outlier points, as shown in Table 4. Observe that our CloudLSTM still obtains the lowest prediction error as compared to the other models considered. Taking a closer look at the CNN-based models, the CloudCNN, which employs the DConv operator, obtains the best forecasting performance relative to CNN, 3D-CNN, DefCNN and PointCNN. We dive deeper into the robustness of our CloudLSTM to outliers by conducting experiments under more controlled scenarios. To this end, we randomly select 50 weather stations in each city cluster and construct a toy dataset. Among these weather stations, we randomly pick 10 as outliers, and move their positions away from the center by d = {0, 0.5, 1, 5} on both the x and y axes. The direction of movement depends on the quadrant of each outlier. Note that the original position of each weather station is normalized to [0, 1], so d = 5 means the point is moved by a distance 5 times the maximum range of its original position. The positions of the remaining 40 weather stations remain unchanged. We show the positions of the weather stations after moving them by different d for both city clusters in Fig. 14 and 15. We retrain the CloudLSTM and PointLSTM under the same settings, and show the MAE and RMSE performance of each in Table 5. Observe that the proposed CloudLSTM performs almost equally well when forecasting over inliers and outliers, irrespective of the distance to the outliers.
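A small numpy sketch of the displacement procedure used in this controlled experiment is shown below. It assumes coordinates already normalised to [0, 1] and moves each designated outlier away from the cluster centre according to its quadrant, which is our reading of the description above rather than the authors' exact code.

import numpy as np

def displace_outliers(coords, outlier_idx, d):
    # coords: (N, 2) station coordinates in [0, 1]; outlier_idx: stations treated
    # as outliers; d: displacement added on both axes, away from the centre.
    coords = coords.copy()
    centre = coords.mean(axis=0)
    for i in outlier_idx:
        direction = np.sign(coords[i] - centre)   # quadrant w.r.t. the centre
        direction[direction == 0] = 1.0
        coords[i] += d * direction
    return coords

rng = np.random.default_rng(0)
stations = rng.random((50, 2))                        # 50 randomly selected stations
outliers = rng.choice(50, size=10, replace=False)     # 10 of them become outliers
displaced = displace_outliers(stations, outliers, d=5.0)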
Importantly, CloudLSTM achieves significantly better performance over its counterpart PointLSTM. This further demonstrates that our model is robust to outliers, whose locations appear "lone". We further compare our proposal with simple baselines, which also perform forecasting based on knearest neighbors of each target point. To this end, we construct MLPs and LSTMs with the structures specified in Table 3, but with different input form. Specifically, for each point, the models perform prediction using only the K nearest neighbors' data, with K from {1, 3, 6, 9, 25, 50, 100}. We show their performance along with that of our CloudLSTM on the air quality dataset in Table 6. Observe that our CloudLSTM significantly outperforms MLPs and LSTMs, which conduct forecasting only relying on k-nearest neighbors. The number of neighbors K affects the receptive field of each model. A small K means the model only relies on limited local spatial dependencies, while global spatial correlations between points are neglected. In contrast, a large K enables looking around larger location spaces, while this might lead to overfitting. The in the table suggest that the K does not affect the performance of each baseline significantly. Meanwhile our proposed CloudLSTM, which extracts local spatial dependencies through DConv kernels and merges global spatial dependency via stacks of time steps and layers, is superior to these simple baselines. Finally, we notice that seasonal information exists in the mobile traffic series, which can be further exploited to improve the forecasting performance. However, directly feeding the model with data spanning multiple days is infeasible, since, e.g., a 7-day window corresponds to a 2016-long sequence as input (given that data is sampled every 5 minutes) and it is very difficult for RNN-based models to handle such long sequences. In addition, by considering the number of mobile services and antennas, the input for 7 days would have 60,673,536 data points. This would make any forecasting model extremely large and therefore impractical for real deployment. To capture seasonal information more efficiently, we concatenate the 30 minute-long sequences (sampled every 5 minutes) with a sub-sampled 7-day window (sampled every 2h). This forms an input with length 90 (6 + 84). We conduct experiments on a randomly selected subset (100 antennas) of the mobile traffic dataset (City 1), and show the forecasting performance without and with seasonal information (7-day window) in Table 7. By incorporating the seasonal information, the performance of most forecasting models is boosted. This indicates that the periodic information is learnt by the model, which helps reduce the prediction errors. However, the concatenation increases the length of the input, which also increases the model complexity. Future work will focus on a more efficient way to fuse the seasonal information, with marginal increase in complexity. | This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. | 665 | scitldr |
Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. TransINT maps set of entities (tied by a relation) to continuous sets of vectors that are inclusion-ordered isomorphically to relation implications. With a novel parameter sharing scheme, TransINT enables automatic training on missing but implied facts without rule grounding. We achieve new state-of-the-art performances with signficant margins in Link Prediction and Triple Classification on FB122 dataset, with boosted performance even on test instances that cannot be inferred by logical rules. The angles between the continuous sets embedded by TransINT provide an interpretable way to mine semantic relatedness and implication rules among relations. Recently, learning distributed vector representations of multi-relational knowledge has become an active area of research (Bordes et al.; Nickel et al.; Kazemi & Poole; Wang et al.; Bordes et al.). These methods map components of a KG (entities and relations) to elements of R d and capture statistical patterns, regarding vectors close in distance as representing similar concepts. However, they lack common sense knowledge which are essential for reasoning (Wang et al.; Guo et al.; Nickel & Kiela). For example, "parent" and "father" would be deemed similar by KG embeddings, but by common sense, "parent ⇒ father" yet not the other way around. Thus, one focus of current research is to bring common sense rules to KG embeddings (Guo et al.; Wang et al.; Wei et al.( . Some methods impose hard geometric constraints and embed asymmetric orderings of knowledge (Nickel & Kiela; Vendrov et al.; Vilnis et al.( . However, they only embed hierarchy (unary Is_a relations), and cannot embed n-ary relations in KG's. Moreover, their hierarchy learning is largely incompatible with conventional relational learning, because they put hard constraints on distance to represent partial ordering, which is a common metric of similarity/ relatedness in relational learning. We propose TransINT, a new KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space. TransINT restrict entities tied by a relation to be embedded to vectors in a particular region of R d included isomorphically to the order of relation implication. For example, we map any entities tied by is_father_of to vectors in a region that is part of the region for is_parent_of; thus, we can automatically know that if John is a father of Tom, he is also his parent even if such a fact is missing in the KG. Such embeddings are constructed by sharing and rank-ordering the basis of the linear subspaces where the vectors are required to belong. Mathematically, a relation can be viewed as sets of entities tied by a constraint (Stoll). We take such a view on KG's, since it gives consistancy and interpretability to model behavior. Furthermore, for the first time in KG embedding, we map sets of entitites under relation constraint to a continuous set of points (whose elements are entity vectors) -which learns relationships among not only individual entity vectors but also sets of entities. 
We show that angles between embedded relation sets can identify semantic patterns and implication rules -an extension of the line of thought as in word/ image embedding methods such as Mikolov et al., Frome et al. to relational embedding. Such mining is both limited and less interpretable if embedded sets are discrete (Vilnis et al.; Vendrov et al.) or each entitity itself is embedded to a region, not a member vector of it (Vilnis et al.). 1 TransINT's such interpretable meta-learning opens up possibilities for explainable reasoning in applications such as recommender systems (Ma et al.) and question answering (Hamilton et al. In this section, we describe the intuition and justification of our method. We first define relation as sets, and revisit TransH as mapping relations to sets in R d . Finally, we propose TransINT, which connects the ordering of the two aforementioned sets. We put * next to definitions and theorems we propose/ introduce. Otherwise, we use existing definitions and cite them. We define relations as sets and implication as inclusion of sets, as in set-theoretic logic. Definition (Relation Set): Let r i be a binary relation x, y entities. Then, r i (x, y) iff there exists some set R i such that the pair (x, y) ∈ R i. R i is called the relation set of r i. (Stoll) For example, consider the distinct relations in Figure 1a, and their corresponding sets in Figure 1b; Is_Father_Of(Tom, Harry) is equivalent to (Tom, Harry) ∈ R Is_Father_Of. Definition (Logical Implication): For two relations, r 1 implies r 2 (or r 1 ⇒ r 2) iff ∀x, y,; the orange dot is the origin, to emphasize that a vector is really a point from the origin but can be translated and considered equivalently. (a): first projecting #» h and #» t onto H is_f amily_of, and then requiring: first substracting #» t from #» h, and then projecting the distance (# » t − h) to H is_f amily_of and requiring (# » t − h) ⊥ ≈ rj. The red line is unique because it is when # » r is_f amily_of is translated to the origin. 2.2 : TRANSE AND TRANSH Given a fact triple (h, r, t) in a given KG (i.e. (Harry, is_father_of, Tom)), TransE wants #» h + #» r ≈ #» t where #» h, #» r, #» t are embeddings of h, r, t. In other words, the distance between two entity vectors is equal to a fixed relation vector. TransE applies well to 1-to-1 relations but has issues for N-to-1, 1-to-N and N-to-N relations, because the distance between two vectors are unique and thus two entities can only be tied with one relation. To address this, TransH constraints the distance of entities in a multi-relational way, by decomposing distance with projection (Figure 2a). TransH first projects an entity vector into a hyperplane unique to each relation, and then requires their difference is some constant value. Like TransE, it embeds an entity to a vector. However, for each relation r j, it assigns two components: a relation-specific hyperplane H j and a fixed vector #» r j on H j. For each fact triple (h, r j, t), TransH wants (Figure 2) where (Figure 2a). Revisiting TransH We interpret TransH in a novel perspective. An equivalent way to put Eq.1 is to change the order of subtraction and projection: This means that all entity vectors (#» h, #» t) such that their distance # » t − h belongs to the red line are considered to be tied by relation r j (Figure 2b) i.e. R j ≈ the red line. For example, The red line is the set of all vectors whose projection onto H j is the fixed vector #» r j. 
Thus, upon a deeper look, TransH actually embeds a relation set in KG (figure 1b) to a particular set in R d. We call such sets relation space for now; in other words, a relation space of some relation r i is the space where each (h, r i, t)'s # » t − h can exist. We formally visit it later in Section 3.1.. (Figure 2b) perspective. The blue line, red line, and the green plane is respectively is_father_of, is_mother_of and is_parent_of's relation space -where # » t − h's of h, t tied by these relations can exist. The blue and the red line lie on the green plane -is_parent_of's relation space includes the other two's. 2.3 TRANSINT Like TransH, TransINT embeds a relation r j to a (subspace, vector) pair (H j, #» r j). However, TransINT modifies the relation embeddings (H j, #» r j) so that the relation spaces (i.e. red line of Figure 2b) are ordered by implication; we do so by intersecting the H j's and projecting the #» r j's (Figure 3a). We explain with familial relations as a running example. Intersecting the H j's TransINT assigns distinct hyperplanes H is_f ather_of and H is_mother_of to is_father_of and is_mother_of. However, because is_parent_of is implied by the aforementioned relations, we assign H is_parent_of = H is_f ather_of ∩ H is_mother_of. TrainsINT's H is_parent_of is not a hyperplane but a line (Figure 3a), unlike in TransH where all H j's are hyperplanes. Projecting the #» r j's TransH constrains the #» r j's with projections (Figure 3a 's dotted orange lines). First, # » r is_f ather_of and # » r is_mother_of are required to have the same projection onto H is_parent_of. Second, # » r is_parent_of is that same projection onto H is_parent_of. We connect the two above constraints to ordering relation spaces. Figure 3b graphically illustrates that is_parent_of's relation space (green hyperplane) includes those of is_father_of (blue line) and is_mother_of (red line). More generally, TransINT requires two hard geometric constraints on (H j, #» r j)'s that For distinct relations r i, r j, require the following if and only if r i ⇒ r j: Intersection Constraint: We prove that these two constraints guarantee that an ordering isomorphic to implication holds in the embedding space: (r i ⇒ r j) iff (r i 's rel. space ⊂ r j 's rel. space) or equivalently, The orderings are isomorphic, because for example, if is_parent_of subsumes is_father_of, the first relation space also subsumes the latter's (Figure 3). At first sight, it may look paradoxical that the H j's and the relation spaces are inversely ordered; however, it is a natural consequence of the rank-based geometry in R d. In this section, we formally state TransINT's isomorphic guarantee and its grounds. We also discuss the intuitive meaning of our method. We denote all d × d matrices with capital letters (ex) A) and vectors with arrows on top (ex) #» b ). In R d, points are projected to linear subspaces by projection matrices; each linear subspace H i has a projection matrix Strang). For example, in Figure 4, a random point #» a ∈ R d is projected onto H 1 when multiplied by P 1; i.e. P 1 a = #» b ∈ H 1. In the rest of the paper, denote P i as the projection matrix onto subspace H i. Now, we algebraically define a general concept that subsumes relation space(Figure 3b). Let H be a linear subspace and P its projection matrix. 
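The displayed forms of the Intersection and Projection Constraints do not survive in the text above. From the running example (H_is_parent_of = H_is_father_of ∩ H_is_mother_of, with r_is_parent_of being the common projection onto it), a plausible reading of the two constraints imposed if and only if r_i ⇒ r_j is the following; this is our reconstruction from the prose, not a verbatim restatement.

```latex
% Intersection Constraint: the subspace of the implied (more general) relation
% is contained in that of the implying relation.
H_j \subset H_i

% Projection Constraint: the implied relation's vector is the projection of the
% implying relation's vector onto H_j (P_j is the projection matrix onto H_j).
\vec{r}_j = P_j\,\vec{r}_i

% Guaranteed ordering (Section 3):
% \; r_i \Rightarrow r_j \iff \mathrm{Sol}(P_i, \vec{r}_i) \subset \mathrm{Sol}(P_j, \vec{r}_j).
```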
Then, given #» k on H, the set of vectors that become #» k when projected on to H, or the solution space of With this definition, relation space (Figure 3b) is (Sol(P i, #» r i)), where P i is the projection matrix of H i (subspace for relation r i); it is the set of points Main Theorem 1 (Isomorphism): Let {(H i, #» r i)} n be the (subspace, vector) embeddings assigned to relations {R i} n by the Intersection Constraint and the Projection Constraint; P i the projection matrix of In actual implementation and training, TransINT requires something less strict than for some non-negative and small. This bounds # » t − h − #» r i to regions with thickness 2, centered around Sol(P i, #» r i) (Figure 5). We prove that isomorphism still holds with this weaker requirement. Main Theorem 2 (Margin-aware Isomorphism): For all non-negative scalar, ({Sol (P i, #» r i)} n, ⊂) is isomorphic to ({R i} n, ⊂). At a first glance, it may look paradoxical that a relation whose H i is the intersection of other relations' H j's (i.e. is_parent_of of Figure 3a) actually subsumes all the relations that were intersected (i.e. is_fahter_of, is_mohter_of). This inverse ordering of H j's and Sol(P j, #» r j) arise from the fact that the two are orthocomplements (Strang). Geometrically, projection is decomposition into independent directions; #» x = P #» x + (I − P) #» x holds for all #» x. In Fig. 4a, one can see that P and I − P are orthogonal. Algebraically, a vector #» x ∈ R d bound by P #» x = b, composed of k independent constraints (rank k), #» x is free in all other d − k directions of I − P (Fig. 4b). Thus, the lesser constraint the space to be projected onto, the more freedom a vector is given; which is isomorphic to that, for example, is_f amily_of puts more freedom on who can be tied by it than is_f ather_of. (Fig. 1b). Thus, the intuitive meaning of the above proof is that we can map degree of freedom in the logical space to that in R d. The intersection and projection constraints can be imposed with parameter sharing. We describe how shared parameters are initialized and trained. From initialization, we bind parameters so that they satisfy the two constraints. For each entity e j, we assign a d-dimensional vector #» e j. To each R i, we assign (H i, #» r i) (or (A i, #» r i)) with parameter sharing. We first construct the H's. Intersection constraint We define the H's top-down, first defining the intersections and then the subspaces that go through it. To the head R h, assign a h linearly independent rows for the basis of H h. Then, to each R i that is not a head, additionally assign a i rows linearly independent to the bases of all of its parents, and construct H i with its bases and the bases of all of its parents. Projection matrices can be uniquely constructed given the bases (Strang). Now, we initlialize the #» r i's. Projection Constraint To the head R h, pick any random x h ∈ R d and assign #» r h = P h x. To each non-head R i whose parent is R p, assign Parameters to be trained Such initialization leaves the following parameters given a KG with entities e j's and relations r i's: A h for the head relation, c i for each non-head relation, #» x i for each head and non-head relation, #» e j for each entity e j. 4.1.1 TRAINING We construct negative examples (wrong fact triplets) and train with a margin-based loss, following the same protocols as in TransE and TransH. Training Objective We adopt the same loss function as in TransH. 
For each fact triplet (h, r i, t), we define the score function f (h, r i, t) = ||P i (# » t − h) − #» r i || 2 and train a margin-based loss L which is aggregates f's and discriminates between correct and negative examples. where G is the set of all triples in the KG and (h, r i, t) is a negative triple made from corrupting (h, r i, t). We minimize this objective with stochastic gradient descent. Automatic Grounding of Positive Triples The parameter sharing scheme guarantees two advantages during all steps of training. First, the intersection and projection constraint are met not only at initialization but always. Second, traversing through a particular (h, r i, t) also automatically executes training with (h, r p, t) for any r i ⇒ r p. For example, by traversing (Tom, is_father_of, Harry) in the KG, the model automatically also traverses (Tom, is_parent_of, Harry), (Tom, is_family_of, Harry), even if the two triples are missing in the KG. This is because P p P i = P p with the given initialization (section 4.1.1) and thus, = f (h, r i, t) In other words, training f (h, r i, t) towards less than automatically guarantees, or has the effect of training f (h, r p, t) towards less than. This enables the model to be automatically trained with what exists in the KG, eliminating the need to manually create missing triples that are true by implication rule. We evaluate TransINT on Freebase 122 (respectively created by Vendrov et al. and Guo et al.) against the current state-of-the-art method. The task is to predict the gold entity given a fact triple with missing head or tail -if (h, r, t) is a fact triple in the test set, predict h given (r, t) or predict t given (h, r). We follow TransE and KALE's protocol. For each test triple (h, r, t), we rank the similarity score (f (e, r, t) when h is replaced with e for every entity e in the KG, and identify the rank of the gold head entity h; we do the same for the tail entity t. Aggregated over all test triples, we report: (i) the mean reciprocal rank (MRR), (ii) the median of the ranks (MED), and (iii) the proportion of ranks no larger than n (HITS@N), which are the same metrics reported by KALE. A lower MED and higher MRR and Hits HITS@N are better. TransH and KALE adopt a "filtered" setting that addresses when entities that are correct, albeit not gold, are ranked before the gold entity. For example, if the gold entity is (Tom, is_parent_of, John) and we rank every entity e for being the head of (?, is_parent_of, John), it is possible that Sue, John's mother, gets ranked before Tom. To avoid this, the "filtered setting" ignore corrupted triplets that exist in the KG when counting the rank of the gold entity. (The setting without this is called the "raw setting"). We compare our performance with that of KALE and previous methods (TransE, TransH, TransR) that were compared against it, using the same dataset (FB122). FB122 is a subset of FB15K (Bordes et al.) accompanied by 47 implication and transitive rules; it consists of 122 Freebase relations on "people", "location", and "sports" topics. Since we use the same train/ test/ validation sets, we directly copy from Guo et al. for reporting on these baselines. 5.1.1 DETAILS OF TRAINING TransINT's hyperparameters are: learning rate (η), margin (γ), embedding dimension (d), and learning rate decay (α), applied every 10 epochs to the learning rate. We find optimal configurations among the following candidates: η ∈ {0.003, 0.005, 0.01}, γ ∈ {1, 2, 5, 10}, d ∈ {50, 100}, α, ∈ {1.0, 0.98, 0.95}. 
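To make the score and margin objective of Section 4.1.1 concrete, here is a minimal PyTorch-style sketch; the missing displayed loss is assumed to be the usual TransE/TransH-style sum over matched positive and corrupted triples, and all tensor and function names are illustrative rather than taken from the authors' code.

```python
import torch

def score(P, h, t, r):
    """f(h, r, t) = || P (t - h) - r ||_2 for one relation's projection matrix P."""
    return torch.norm(P @ (t - h) - r, p=2)

def margin_loss(pos_triples, neg_triples, gamma=5.0):
    """Margin-based ranking loss over matched positive / corrupted triples,
    assumed to follow the TransE/TransH protocol cited in the text."""
    total = torch.zeros(())
    for (P, h, t, r), (Pn, hn, tn, rn) in zip(pos_triples, neg_triples):
        total = total + torch.clamp(gamma + score(P, h, t, r) - score(Pn, hn, tn, rn), min=0.0)
    return total / len(pos_triples)
```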
We create 100 mini-batches of the training set and train for maximum of 1000 epochs with early stopping based on the best median rank. Furthermore, we try training with and without normalizing each of entity vectors, relation vectors, and relation subspace bases after every batch of training. 5.1.2 EXPERIMENT SETTINGS Out of the 47 rules in FB122, 9 are transitive rules (such as person/nationality(x,y) ∧ country/official_language(y,z) ⇒ person/languages(x,z)) to be used for KALE. However, since TransINT only deals with implication rules, we do not take advantage of them, unlike KALE. We also put us on some intentional disadvantages against KALE to assess TransINT's robustness to absence of negative example grounding. In constructing negative examples for the margin-based loss L, KALE both uses rules (by grounding) and their own scoring scheme to avoid false negatives. While grounding with FB122 is not a burdensome task, it known to be very inefficient and difficult for extremely large datasets (Ding et al.). Thus, it is a great advantage for a KG model to perform well without grounding of training/ test data. We evaluate TransINT on two settings for avoiding false negative examples; using rule grounding and only avoiding ones that exist in the KG. We call them respectively TransINT G (grounding), TransINT N G (no grounding). Table 1. While the filtered setting gives better performance (as expected), the trend is generally similar between raw and filtered. TransINT outperforms all other models by large margins in all metrics, even without grounding; especially in the filtered setting, the Hits@N gap between TransINT G and KALE is around 4∼6 times that between KALE and the best performing Trans Baseline (TransR). Also, while TransINT G performs higher than TransINT N G in all settings/metrics, the gap between them is much smaller than the that between TransINT N G and KALE, showing that TransINT robustly brings state-of-the-art performance even without grounding. The suggest two possibilities in a more general sense. First, the emphasis of true positives could be as important as/ more important than avoiding false negatives. Even without manual grounding, TransINT N G has automatic grounding of positive training instances enabled (Section 4.1.1.) due to model properties, and this could be one of its success factors. Second, hard constraint on parameter structures can bring performance boost uncomparable to that by regularization or joint learning, which are softer constraints. We also note that norm regularization of any parameter did not help in training TransINT, unlike stated in TransE, TransH, and KALE. Instead, it was important to use a large margin (either γ = 5 or γ = 10). The task is to classify whether an unobserved instance (h, r, t) is correct or not, where the test set consists of positive and negative instances. We use the same protocol and test set provided by KALE; for each test instance, we evaluate its similarity score f (h, r, t) and classify it as "correct" if f (h, r, t) is below a certain threshold (σ), a hyperparameter to be additionally tuned for this task. We report on mean average precision (MAP), the mean of classification precision over all distinct relations (r's) of the test instances. We use the same experiment settings/ training details as in Link Prediction other than additionally finding optimal σ. 5.2.1 Triple Classification are shown in Table 2. Again, TransINT G and TransINT N G both significantly outperform all other baselines. 
We also separately analyze MAP for relations that are/ are not affected by the implication rules (those that appear/ do not appear in the rules), shown in parentheses of Table 2 with the order of (influenced relations/ uninfluenced relations). We can see that both TransINT's have MAP higher than the overall MAP of KALE, even when the TransINT's have the penalty of being evaluated only on uninfluenced relations; this shows that TransINT generates better embeddings even for those not affected by rules. Furthermore, we comment on the role of negative example grounding; we can see that grounding does not help performance on unaffected relations (i.e. 0.752 vs 0.761), but greatly boosts performance on those affected by rules (0.839 vs 0.709). While TransINT does not necessitate negative example grounding, it does improve the quality of embeddings for those affected by rules. Traditional embedding methods that map an object (i.e. words, images) to a singleton vector learn soft tendencies between embedded vectors, such as semantic similarity (Mikolov et al., Frome et al.). 83.5 n/a 7 A common metric for such tendency is cosine similarity, or angle between two embddings. TransINT extends such line of thought to semantic relatedness between groups of objects, with angles between relation spaces. In Fig. 5b, one can observe that the closer the angle between two embedded regions, the larger the overlap in area. For entities h and t to be tied by both relations r 1, r 2, t − h has to belong to the intersection of their relation spaces. Thus, we hypothesize the following over any two relations r 1, r 2 that are not explicitly tied by the pre-determined rules: Let V 1 be the set of # » t − h's in r 1's relation space (denoted as Rel 1) and V 2 that of r 2' s. Then, Angle between Rel 1 and Rel 2 represents semantic "disjointness" of r 1, r 2; the more disjoint two relations, the closer their angle to 90 •. When the angle between Rel 1 and Rel 2 is small, if majority of V 1 belongs to the overlap of V 1 and V 2 but not vice versa, r 1 implies r 2. if majority of V 1 and V 2 both belong to their overlap, r 1 and r 2 are semantically related. Hypotheses and consider the imbalance of membership in overlapped regions. Exact calculation of this involves specifying an appropriate (Fig. 3). As a proxy for deciding whether an element of V 1 (denote v 1) belongs in the overlapped region, we can consider the distance between v 1 to and its projection to Rel 2; the further away v 1 is from the overlapped region, the larger the projected distance (visualization available in our code repository). We call the mean of such distances from V 1 to For hypothesis, we verified that the vast majority of relation pairs have angles near to 90 •, with the mean and median respectively 83.0 • and 85.4 •; only 1% of all relation pairs had angles less than 50 •. We observed that relation pairs with angle less than 20 • were those that can be inferred by transitively applying the pre-determined implication rules. Relation pairs with angles within the range of [20 •] had strong tendencies of semantic relatedness or implication; such tendency drastically weakened past 70 •. Table 3 shows the angle and imb of relations with respect to /people/person/place_of_birth, whose trend agrees with our hypotheses. While we only show a subset of the complete list, we note that almost all relation pairs generally follow such a tendency; the complete list can be accessed in our code repository. 
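The angle and imbalance statistics used above can be computed directly from the learned embeddings; below is a small NumPy/SciPy sketch under the assumption that each relation space is summarized by a basis of its direction subspace (the affine offset is ignored for the angle), with V1 holding sampled t − h vectors for the first relation. Function and variable names are ours, and scipy.linalg.subspace_angles is used for the principal angles.

```python
import numpy as np
from scipy.linalg import subspace_angles

def proj(B):
    # Orthogonal projection matrix onto the column space of basis B (d x k).
    return B @ np.linalg.inv(B.T @ B) @ B.T

def angle_deg(basis_1, basis_2):
    # Largest principal angle (degrees) between the two direction subspaces:
    # close to 90 suggests semantically disjoint relations (first hypothesis).
    return np.degrees(subspace_angles(basis_1, basis_2)).max()

def imb(V1, basis_2):
    # Mean distance from relation 1's sampled (t - h) vectors (rows of V1) to
    # their projection onto relation 2's direction subspace: a proxy for how
    # much of V1 falls outside the overlapped region (second/third hypotheses).
    P2 = proj(basis_2)           # symmetric, so right-multiplication suffices
    return np.mean(np.linalg.norm(V1 - V1 @ P2, axis=1))
```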
Finally, we note that such analysis could be possible with TransH as well, since their method too maps # » t − h's to lines (Fig. 2b). Throughout target tasks (Link Prediction, Triple Classification) and semantics mining, TransINT's theme of optimal regions to bound entity sets is unified and consistent. Furthermore, the integration of rules into embedding space geometrically coherent with KG embeddings alone. These two qualities were missing in existing works such as TransE or KALE, and TransINT opens up new possibilities for applying KG embeddings to explainable reasoning in applications such as recommender systems (Ma et al.) and question answering (Hamilton et al.). Our work is related to three strands of work. The first strand is Order Embeddings (Vendrov et al.) and their extensions (Vilnis et al.; Athiwaratkun & Wilson), whose limitation we discussed in the introduction. While Nickel & Kiela also approximately embed unary partial ordering, their focus is on achieving reasonably competent with unsupervised learning of rules in low dimensions, while ours is achieving state-of-the-art in a supervised setting. The second strand is those that enforce the satisfaction of common sense logical rules in the embedded KG. Wang et al. explicitly constraints the ing embedding to satisfy logical implications and type constraints via linear programming, but it only requires to do so during inference, not learning. On the other hand,Guo et al. induces that embeddings follow a set of logical rules during learning, but their approach is soft induction not hardly constrain. Our work combines the advantages of both works. We presented TransINT, a new KG embedding method that embed sets of entities (tied by relations) to continuous sets in R d that are inclusion-ordered isomorphically to relation implications. Our method achieved new state-of-the-art performances with signficant margins in Link Prediction and Triple Classification on the FB122 dataset, with boosted performance even on test instances that are not affected by rules. We further propose and interpretable criterion for mining semantic similairty among sets of entities with TransINT. Here, we provide the proofs for Main Theorems 1 and 2. We also explain some concepts necessary in explaining the proofs. We put * next to definitions and theorems we propose/ introduce. Otherwise, we use existing definitions and cite them. We explain in detail elements of R d that were intuitively discussed. In this and later sections, we mark all lemmas and definitions that we newly introduce with *; those not marked with * are accompanied by reference for proof. We denote all d × d matrices with capital letters (ex) A) and vectors with arrows on top (ex) #» b ). The linear subspace given by that are solutions to the equation; its rank is the number of constraints A(x − #» b) = 0 imposes. For example, in R 3, a hyperplane is a set of #» x = [x 1, x 2, x 3] ∈ R 3 such that ax 1 + bx 2 + cx 3 − d = 0 for some scalars a, b, c, d; because vectors are bound by one equation (or its "A" only really contains one effective equation), a hyperplane's rank is 1 (equivalently rank(A) = 1). On the other hand, a line in R 3 imposes to 2 constraints, and its rank is 2 (equivalently rank(A) = 2). Consider two linear subspaces H 1, H 2, each given by A 1 (#» x − #» b 1) = 0, A 2 (#» x − #» b 2) = 0. Then, by definition. In the rest of the paper, denote H i as the linear subspace given by some A i (#» x − #» b i) = 0. 
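As a quick numerical illustration of the machinery set up in this appendix (projection matrices of nested subspaces, and the solution spaces Sol(P, k)), the following NumPy snippet checks the commuting-projection property of Lemma 4 below and the resulting inclusion of solution spaces; dimensions and variable names are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

B_big = rng.standard_normal((d, 3))   # basis of H_big, a 3-dim subspace of R^d
B_small = B_big[:, :1]                # H_small spanned by a subset, so H_small ⊂ H_big

def proj(B):
    """Orthogonal projection matrix onto the column space of B."""
    return B @ np.linalg.inv(B.T @ B) @ B.T

P_big, P_small = proj(B_big), proj(B_small)

# Lemma 4: H_small ⊂ H_big  <=>  P_small P_big = P_big P_small = P_small.
assert np.allclose(P_small @ P_big, P_small)
assert np.allclose(P_big @ P_small, P_small)

# Inclusion of solution spaces: with r_small = P_small r_big, every x satisfying
# P_big x = r_big also satisfies P_small x = r_small, i.e.
# Sol(P_big, r_big) ⊂ Sol(P_small, r_small).
r_big = P_big @ rng.standard_normal(d)
r_small = P_small @ r_big
x = r_big + (np.eye(d) - P_big) @ rng.standard_normal(d)   # a point of Sol(P_big, r_big)
assert np.allclose(P_big @ x, r_big) and np.allclose(P_small @ x, r_small)
```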
Invariance: For all x on H, projecting x onto H is still x; the converse is also true. Lemma 1: Px = x ⇔ x ∈ H (Strang). Orthogonality: Projection decomposes any vector x into two orthogonal components, Px and (I − P)x (Figure 4). Thus, for any projection matrix P, I − P is also a projection matrix that is orthogonal to P (i.e. P(I − P) = 0) (Strang). Lemma 2: Let P be a projection matrix. Then I − P is also a projection matrix such that P(I − P) = 0 (Strang). The following lemma also follows. Lemma 3: ||Px|| ≤ ||Px + (I − P)x|| = ||x|| (Strang). Projection onto an included space: If one subspace H_1 includes H_2, the order of projecting a point onto them does not matter. For example, in Figure 3, a random point a in R^3 can be first projected onto H_1 at b, and then onto H_3 at d. On the other hand, it can be first projected onto H_3 at d, and then onto H_1 at still d. Thus, the order of applying projections onto spaces that include one another does not matter. If we generalize, we obtain the following two lemmas (Figure 6): Lemma 4*: For two subspaces, H_1 ⊂ H_2 if and only if P_1 P_2 = P_2 P_1 = P_1. Proof) By Lemma 1, if H_1 ⊂ H_2, then P_2 x = x for all x ∈ H_1; on the other hand, if H_1 ⊄ H_2, then there is some x ∈ H_1, x ∉ H_2 such that P_2 x ≠ x. Thus, H_1 ⊂ H_2 ⇔ ∀y, P_2(P_1 y) = P_1 y ⇔ P_2 P_1 = P_1. Because projection matrices are symmetric (Strang), P_1 P_2 = P_1^T P_2^T = (P_2 P_1)^T = P_1^T = P_1. Lemma 5*: For two subspaces H_1, H_2 and a vector k ∈ H_2, | We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way. | 666 | scitldr |
Unsupervised domain adaptive object detection aims to learn a robust detector on the domain shift circumstance, where the training (source) domain is label-rich with bounding box annotations, while the testing (target) domain is label-agnostic and the feature distributions between training and testing domains are dissimilar or even totally different. In this paper, we propose a gradient detach based Stacked Complementary Losses (SCL) method that uses detection objective (cross entropy and smooth l1 regression) as the primary objective, and cuts in several auxiliary losses in different network stages to utilize information from the complement data (target images) that can be effective in adapting model parameters to both source and target domains. A gradient detach operation is applied between detection and context sub-networks during training to force networks to learn discriminative representations. We argue that the conventional training with primary objective mainly leverages the information from the source-domain for maximizing likelihood and ignores the complement data in shallow layers of networks, which leads to an insufficient integration within different domains. Thus, our proposed method is a more syncretic adaptation learning process. We conduct comprehensive experiments on seven datasets, the demonstrate that our method performs favorably better than the state-of-the-art methods by a large margin. For instance, from Cityscapes to FoggyCityscapes, we achieve 37.9% mAP, outperforming the previous art Strong-Weak by 3.6%. In real world scenarios, generic object detection always faces severe challenges from variations in viewpoint, , object appearance, illumination, occlusion conditions, scene change, etc. These unavoidable factors make object detection in domain-shift circumstance becoming a challenging and new rising research topic in the recent years. Also, domain change is a widely-recognized, intractable problem that urgently needs to break through in reality of detection tasks, like video surveillance, autonomous driving, etc. (see Figure 2). Revisiting Domain-Shift Object Detection. Common approaches for tackling domain-shift object detection are mainly in two directions: (i) training supervised model then fine-tuning on the target domain; or (ii) unsupervised cross-domain representation learning. The former requires additional instance-level annotations on target data, which is fairly laborious, expensive and time-consuming. So most approaches focus on the latter one but still have some challenges. The first challenge is that the representations of source and target domain data should be embedded into a common space for matching the object, such as the hidden feature space , input space ) or both of them (b). The second is that a feature alignment/matching operation or mechanism for source/target domains should be further defined, such as subspace alignment , H-divergence and adversarial learning , MRL (b), Strong-Weak alignment , etc. In general, our SCL is also a learning-based alignment method across domains with an end-to-end framework. (a) Non-adapted (b) CVPR'18 (c) CVPR'19 (d) SCL (Ours) (e) Non-adapted (f) CVPR'18 (g) CVPR'19 (h) SCL (Ours) Figure 1: Visualization of features from PASCAL to Clipart (first row) and from Cityscapes to FoggyCityscapes (second row) by t-SNE . Red indicates the source examples and blue is the target one. If source and target features locate in the same position, it is shown as light blue. 
All models are re-trained with a unified setting to ensure fair comparisons. It can be observed that our feature embedding are consistently much better than previous approaches on either dissimilar domains (PASCAL and Clipart) or similar domains (Cityscapes and FoggyCityscapes). Our Key Ideas. The goal of this paper is to introduce a simple design that is specific to convolutional neural network optimization and improves its training on tasks that adapt on discrepant domains. Unsupervised domain adaptation for recognition has been widely studied by a large body of previous literature (; ; ; ; ; ; ;), our method more or less draws merits from them, like aligning source and target distributions with adversarial learning (domain-invariant alignment). However, object detection is a technically different problem from classification, since we would like to focus more on the object of interests (local regions). Figure 2: Illustration of domain-shift object detection in autonomous driving scenario. Images are from INIT dataset . Some recent work has proposed to conduct alignment only on local regions so that to improve the efficiency of model learning. While this operation may cause a deficiency of critical information from context. Inspired by multi-feature/strong-weak alignment (; ;) which proposed to align corresponding local-region on shallow layers with small respective field (RF) and align imagelevel features on deep layers with large RF, we extend this idea by studying the stacked complementary objectives and their potential combinations for domain adaptive circumstance. We observe that domain adaptive object detection is supported dramatically by the deep supervision, however, the diverse supervisions should be applied in a controlled manner, including the cut-in locations, loss types, orders, updating strategy, etc., which is one of the contributions of this paper. Furthermore, our experiments show that even with the existing objectives, after elaborating the different combinations and training strategy, our method can obtain competitive . By pluging-in a new sub-network that learns the context features independently with gradient detach updating strategy in a hierarchical manner, we obtain the best on several domain adaptive object detection benchmarks. The Relation to Complement Objective Training and Deep Supervision . COL proposed to involve additional function that complements the primary objective, and updated the parameters alternately with primary and complement objectives. Specifically, cross entropy is used as the primary objective H p: where y i ∈ {0, 1} D is the label of the i-th sample in one-hot representation andŷ i ∈ D is the predicted probabilities. Th complement entropy H c is defined in COT as the average of sample-wise entropies over complement classes in a mini-batch: where H is the entropy function.ŷ c is the predicted probabilities of complement classes c. The training process is that: for each iteration of training, 1) update parameters by H p first; then 2) update parameters by H c. In contrast, we don't use the alternate strategy but update the parameters simultaneously using gradient detach strategy with primary and complement objectives. Since we aim to let the network enable to adapt on both source and target domain data and meanwhile enabling to distinguish objects from them, thus our complement objective design is quite different from COT. We will describe with details in Section 2. 
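The displayed definitions of H_p and H_c do not survive above. For reference, the primary objective is the usual averaged cross-entropy, and the complement entropy of COT is the mean entropy of the predictions renormalized over the non-ground-truth classes; the renormalization by (1 − ŷ_g) is recalled from the COT paper rather than read from this text, so treat it as an assumption.

```latex
% Primary objective: mean cross-entropy over a mini-batch of N samples.
H_p(y, \hat{y}) = -\frac{1}{N}\sum_{i=1}^{N} y_i \cdot \log \hat{y}_i

% Complement entropy: mean entropy of the complement-class distribution,
% where g_i is the ground-truth class of sample i and H(p) = -\sum_c p_c \log p_c.
H_c = \frac{1}{N}\sum_{i=1}^{N}
      H\!\left(\left\{\frac{\hat{y}_{ic}}{1-\hat{y}_{i g_i}}\right\}_{c \ne g_i}\right)
```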
In essence, our method is more likely to be the deeply supervised formulation that backpropagation of error now proceeds not only from the final layer but also simultaneously from our intermediate complementary outputs. While DSN is basically proposed to alleviate "vanishing" gradient problem, here we focus on how to adopt these auxiliary losses to promote to mix two different domains through domain classifiers for detection. Interestingly, we observe that diverse objectives can lead to better generalization for network adaptation. Motivated by this, we propose Stacked Complementary Losses (SCL), a simple yet effective approach for domain-shift object detection. Our SCL is fairly easy and straight-forward to implement, but can achieve remarkable performance. We conjecture that previous approaches that focus on conducting domain alignment on high-level layers only cannot fully adapt shallow layer parameters to both source and target domains (even local alignment is applied ) which restricts the ability of model learning. Also, gradient detach is a critical part of learning with our complementary losses. We further visualize the features obtained by non-adapted model, DA , Strong-Weak and ours, features are from the last layer of backbone before feeding into the Region Proposal Network (RPN). As shown in Figure 1, it is obvious that the target features obtained by our model are more compactly matched with the source domain than any other models. Contributions. Our contributions in this paper are three-fold. • We propose an end-to-end learnable framework that adopts complementary losses for domain adaptive object detection. We study the deep supervisions in this task with a controlled manner. Our method allows information from source and target domains to be integrated seamlessly. • We propose a gradient detach learning strategy to enable complementary losses to learn a better representation and boost the performance. We also provide extensive ablation studies to empirically verify the effectiveness of each component in our framework design. • To the best of our knowledge, this is a pioneer work to investigate the influence of diverse loss functions and gradient detach for domain adaptive object detection. Thus, this work gives very good intuition and practical guidance with multi-objective learning for domain adaptive object detection. More remarkably, our method achieves the highest accuracy on several domain adaptive or cross-domain object detection benchmarks, which are new records on this task. Following the common formulation of domain adaptive object detection, we define a source domain S where annotated bound-box is available, and a target domain T where only the image can be used in training process without any labels. Our purpose is to train a robust detector that can adapt well to both source and target domain data, i.e., we aim to learn a domain-invariant feature representation that works well for detection across two different domains. As shown in Figure 3, we focus on the complement objective learning and let S = {(x i is the corresponding bounding box and category labels for sample x i}. We define a recursive function for layers k = 1, 2,..., K where we cut in complementary losses: whereΘ k is the feature map produced at layer k, F is the function to generate features at layer k and Z k is input at layer k. We formulate the complement loss of domain classifier k as follows: where k denote feature maps from source and target domains respectively. 
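The displayed equation for the complement loss of the k-th domain classifier is not reproduced above; the usual form for such an adversarial stage-wise domain classifier D_k over source and target feature maps, which this loss presumably instantiates (with cross-entropy, least-squares or focal variants discussed later), is the binary cross-entropy below. This is our restatement, not the paper's exact equation.

```latex
% Stage-k domain classifier: D_k predicts "source" (1) vs "target" (0) from the
% stage-k feature maps; the gradient reverse layer flips its gradient so that
% the backbone is pushed toward domain-invariant features.
\mathcal{L}_k =
  -\,\mathbb{E}_{x^{s}\sim\mathcal{S}}\big[\log D_k(\Theta_k^{s})\big]
  -\,\mathbb{E}_{x^{t}\sim\mathcal{T}}\big[\log\big(1 - D_k(\Theta_k^{t})\big)\big]
```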
Following , we also adopt gradient reverse layer (GRL) to enable adversarial training where a GRL layer is placed between the domain classifier and the detection backbone network. During backpropagation, GRL will reverse the gradient that passes through from domain classifier to detection network. For our instance-context alignment loss L ILoss, we take the instance-level representation and context vector as inputs. The instance-level vectors are from RoI layer that each vector focuses on the representation of local object only. The context vector is from our proposed sub-network that combine hierarchical global features. We concatenate instance features with same context vector. Since context information is fairly different from objects, joint training detection and context networks will mix the critical information from each part, here we proposed a better solution that uses detach strategy to update the gradients. We will introduce it with details in the next section. Aligning instance and context representation simultaneously can help to alleviate the variances of object appearance, part deformation, object size, etc. in instance vector and illumination, scene, etc. in context vector. We define d i as the domain label of i-th training image where d i = 1 for the source and d i = 0 for the target, so the instance-context alignment loss can be further formulated as: where N s and N t denote the numbers of source and target examples. P (i,j) is the output probabilities of the instance-context domain classifier for the j-th region proposal in the i-th image. So our total SCL objective L SCL can be written as: In this section, we introduce a simple detach strategy which prevents the flow of gradients from context sub-network through the detection backbone path. We find this can help to obtain more discriminative context and we show empirical evidence (see Figure 6) that this path carries information with diversity and hence gradients from this path getting suppressed is superior for such task. As aforementioned, we define a sub-network to generate the context information from early layers of detection backbone. Intuitively, instance and context will focus on perceptually different parts of an image, so the representations from either of them should also be discrepant. However, if we train with the conventional process, the companion sub-network will be updated jointly with the detection backbone, which may lead to an indistinguishable behavior from these two parts. To this end, in this paper we propose to suppress gradients during backpropagation and force the representation of context sub-network to be dissimilar to the detection network, as shown in Algorithm 1. To our best knowledge, this may be the first work to show the effectiveness of gradient detach that can help to learn better context representation for domain adaptive object detection. Although the detach-based method has been adopted in a few work for better optimization on sequential tasks, our design and motivation are quite different from it. The details of our context sub-network architecture are illustrated in Appendix A. 3. Update detection net by detection and complementary objectives: L det +L SCL Our framework is based on the Faster RCNN , including the Region Proposal Network (RPN) and other modules. The objective of the detection loss is summarized as: where L cls is the classification loss and L reg is the bounding-box regression loss. 
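The gradient reverse layer and the gradient-detach strategy described in this section are both one-liners in practice; a minimal PyTorch sketch is shown below. The helper names (context_net, domain_classifier, backbone_feat) are illustrative only and not taken from the authors' implementation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient reverse layer (GRL): identity in the forward pass,
    multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Gradient detach (Algorithm 1): the context sub-network reads backbone features
# but its gradients never flow back into the detection backbone.
# context_vec   = context_net(backbone_feat.detach())
# domain_logits = domain_classifier(grad_reverse(backbone_feat))
```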
To train the whole model using SGD, the overall objective function in the model is: where λ is the trade-off coefficient between detection loss and our complementary loss. R denotes the RPN and other modules in Faster RCNN. Following , we feed one labeled source image and one unlabeled target one in each mini-batch during training. Datasets. We evaluate our approach in three different domain shift scenarios: Similar Domains; Discrepant Domains; and From Synthetic to Real Images. All experiments are conducted on seven domain shift datasets: Cityscapes to FoggyCityscapes, Cityscapes to KITTI , KITTI to Cityscapes, INIT Dataset , PASCAL to Clipart , PASCAL to Watercolor , GTA (Sim 10K) to Cityscapes. Implementation Details. In all experiments, we resize the shorter side of the image to 600 following with ROI-align . We train the model with SGD optimizer and the initial learning rate is set to 10 −3, then divided by 10 after every 50,000 iterations. Unless otherwise stated, we set λ as 1.0 and γ as 5.0, and we use K = 3 in our experiments (the analysis of hyper-parameter K is shown in Table 7). We report mean average precision (mAP) with an IoU threshold of 0.5 for evaluation. Since there are few pioneer works for exploring the combination of different losses for domain adaptive object detection, here we conduct extensive ablation study for this part to find the best collocation of our SCL method. We follow some objective design from DA and Weak-Strong which provides guidance for us to utilize these losses. Cross-entropy (CE) Loss. CE loss measures the performance of a classification model whose output is a probability value. It increases as the predicted probability diverges from the actual label: where p c ∈ is the predicted probability observation of c class. y c is the c class label. Least-squares (LS) Loss. Following , we adopt LS loss to stabilize the training of the domain classifier for aligning low-level features. The loss is designed to align each receptive field of features with the other domain. The least-squares loss is formulated as: where D Θ (s) wh denotes the output of the domain classifier in each location (w, h). Focal Loss (FL). Focal loss L FL is adopted to ignore easy-to-classify examples and focus on those hard-to-classify ones during training: -adapted)) 30.2 53.5 DA 38.5 64.1 DA (Our impl.) 35.6 70.8 SW (Our impl.) 37.9 71.0 Ours 41.9 72.7 The are summarized in Table 1. We present several combinations of four complementary objectives with their loss names and performance. We observe that "LS-CE-F L-F L" obtains the best accuracy with Context and Detach. It indicates that LS can only be placed on the low-level features (rich spatial information and poor semantic information) and F L should be in the high-level locations (weak spatial information and strong semantic information). For the middle location, CE will be a good choice. If you use LS for the middle/high-level features or use F L on the low-level features, it will confuse the network to learn hierarchical semantic outputs, so that ILoss+detach will lose effectiveness under that circumstance. This verifies that domain adaptive object detection is supported by deep supervision, however, the diverse supervisions should be applied in a controlled manner. Furthermore, our proposed method performed much better than baseline Strong-Weak (37.9% vs.34.3%) and other state-of-the-arts. Between Cityspaces and KITTI. 
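The displayed focal-loss equation (and the least-squares one above it) did not survive extraction; for binary domain labels d ∈ {0, 1} and sigmoid outputs p, standard forms of the three complementary losses discussed here would look roughly as follows. These are sketches of the generic losses, not the authors' code, and the small epsilon terms are added only for numerical stability.

```python
import torch

def ce_domain_loss(p, d, eps=1e-8):
    # Cross-entropy on the domain label d (1 = source, 0 = target).
    return -(d * torch.log(p + eps) + (1 - d) * torch.log(1 - p + eps)).mean()

def ls_domain_loss(p_map, d):
    # Least-squares loss over every location (w, h) of a low-level feature map,
    # as in the strong-weak alignment formulation cited in the text.
    return ((p_map - float(d)) ** 2).mean()

def focal_domain_loss(p, d, gamma=5.0, eps=1e-8):
    # Focal loss: the (1 - p_t)^gamma factor down-weights easy-to-classify examples.
    p_t = p * d + (1 - p) * (1 - d)
    return -(((1 - p_t) ** gamma) * torch.log(p_t + eps)).mean()
```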
In this part, we focus on studying adaptation between two real and similar domains, as we take KITTI and Cityscapes as our training and testing data. Following , we use KITTI training set which contains 7,481 images. We conduct experiments on both adaptation directions K → C and C → K and evaluate our method using AP of car as in DA. As shown in Table 2, our proposed method performed much better than the baseline and other stateof-the-art methods. Since Strong-Weak didn't provide the on this dataset, we re-implement it and obtain 37.9% AP on K→C and 71.0% AP on C→K. Our method is 4% higher than the former and 1.7% higher than latter. If comparing to the non-adapted (source only), our method outperforms it with a huge margin (about 10% and 20% higher, respectively). INIT Dataset. INIT Dataset contains 132,201 images for training and 23,328 images for testing. There are four domains: sunny, night, rainy and cloudy, and three instance categories, including: car, person, speed limited sign. This dataset is first proposed for the instance-level image-to-image translation task, here we use it for the domain adaptive object detection purpose. Our are shown in Table 3. Following , we conduct experiments on three domain pairs: sunny→night (s2n), sunny→rainy (s2r) and sunny→cloudy (s2c). Since the training images in rainy domain are much fewer than sunny, for s2r experiment we randomly sample the training data in sunny set with the same number of rainy set and then train the detector. It can be observed that our method is consistently better than the baseline method. We don't provide the of s2c (faster) because we found that cloudy images are too similar to sunny in this dataset (nearly the same), thus the non-adapted is very close to the adapted methods. In this section, we focus on the dissimilar domains, i.e., adaptation from real images to cartoon/artistic. Following , we use PASCAL VOC dataset (2007+2012 training and validation combination for training) as the source data and the Clipart or Watercolor as the target data. The backbone network is ImageNet pre-trained ResNet-101. PASCAL to Clipart. Clipart dataset contains 1,000 images in total, with the same 20 categories as in PASCAL VOC. As shown in Table 4, our proposed SCL outperforms all baselines. In addition, we observe that replacing F L with CE loss on instance-context classifier can further improve the performance from 40.6% to 41.5%. More ablation are shown in our Appendix B.2 (Table 10). PASCAL to WaterColor. Watercolor dataset contains 6 categories in common with PASCAL VOC and has totally 2,000 images (1,000 images are used for training and 1,000 test images for evaluation). Results are summarized in Table 5, our SCL consistently outperforms other state-of-the-arts. Sim10K to Cityscapes. Sim 10k dataset contains 10,000 images for training which are generated by the gaming engine Grand Theft Auto (GTA). Following , we use Cityscapes as target domain and evaluate our models on Car class. Our is shown in Table 6, which consistently outperforms the baselines. Hyper-parameter K. Table 7 shows the for sensitivity of hyper-parameter K in Figure 3. This parameter controls the number of SCL losses and context branches. It can be observed that the proposed method performs best when K = 3 on all three datasets. Parameter Sensitivity on λ and γ. Figure 4 shows the for parameter sensitivity of λ and γ in Eq. 8 and Eq. 11. λ is the trade-off parameter between SCL and detection objectives and γ controls the strength of hard samples in Focal Loss. 
We conduct experiments on two adaptations: Cityscapes → FoggyCityscapes (blue) and Sim10K → Cityscapes (red). On Cityscapes → FoggyCityscapes, we achieve the best performance when λ = 1.0 and γ = 5.0 and the best accuracy is 37.9%. On Sim10K → Cityscapes, the best is obtained when λ = 0.1, γ = 2.0. indicate values from high to low. It can be observed that w/ detach training, our models can learn more discriminative representation between object areas and (context). Analysis of IoU Threshold. The IoU threshold is an important indicator to reflect the quality of detection, and a higher threshold means better coverage with ground-truth. In our previous experiments, we use 0.5 as a threshold suggested by many literature . In order to explore the influence of IoU threshold with performance, we plot the performance vs. IoU on three datasets. As shown in Figure 5, our method is consistently better than the baselines on different threshold by a large margin (in most cases). Why Gradient Detach Can Help Our Model? To further explore why gradient detach can help to improve performance vastly and what our model really learned, we visualize the heatmaps on both source and target images from our models w/o and w/ detach training. As shown in Figure 6, the visualization is plotted with feature maps after Conv B3 in Figure 3. We can observe that the object areas and context from detach-trained models have stronger contrast than w/o detach model (red and blue areas). This indicates that detach-based model can learn more discriminative features from the target object and context. More visualizations are shown in Appendix C (Figure 8). Detection Visualization. Figure 10 shows several qualitative comparisons of detection examples on three test sets with DA , Strong-Weak and our SCL models. Our method detects more small and blurry objects in dense scene (FoggyCityscapes) and suppresses more false positives (Clipart and Watercolor) than the other two baselines. , Strong-Weak and our proposed SCL on three datasets. For each group, the first row is the of DA, the second row is from Strong-Weak and the last row is ours. We show detections with the scores higher than a threshold (0.3 for FoggyCityscapes and 0.5 for other two). In this paper, we have addressed unsupervised domain adaptive object detection through stacked complementary losses. One of our key contributions is gradient detach training, enabled by suppressing gradients flowing back to the detection backbone. In addition, we proposed to use multiple complementary losses for better optimization. We conduct extensive experiments and ablation studies to verify the effectiveness of each component that we proposed. Our experimental outperform the state-of-the-art approaches by a large margin on a variety of benchmarks. Our future work will focus on exploring the domain-shift detection from scratch, i.e., without the pre-trained models like DSOD , to avoid involving bias from the pre-trained dataset. A CONTEXT NETWORK Our context networks are shown in Table 8. We use three branches (forward networks) to deliver the context information and each branch generates a 128-dimension feature vector from the corresponding backbone layers of SCL. Then we naively concatenate them and obtain the final context feature with a 384-dimension vector. In this section, we show the adaptation on source domains in Table 11, 12, 13 and 14. 
Surprisingly, we observe that the best-trained models (on target domains) are not performing best on the source data, e.g., from PASCAL VOC to WaterColor, DA obtained the highest on source domain images (although the gaps with Strong-Weak and ours are marginal). We conjecture that the adaptation process for target domains will affect the learning and performing on source domains, even we have used the bounding box ground-truth on source data for training. We will investigate it more thoroughly in our future work and we think the community may also need to rethink whether evaluating on source domain should be a metric for domain adaptive object detection, since it can help to understand the behavior of models on both source and target images. We provide the detailed of parameter sensitivity on λ and γ in Table 15 and 16 with the adaptation of from Cityscapes to FoggyCityscapes and from Sim10K to Cityscapes. Figure 9, the gradient detach-based models can adapt source and target images to a similar distribution better than w/o detach models. | We introduce a new gradient detach based complementary objective training strategy for domain adaptive object detection. | 667 | scitldr |
Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains. While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant. Recent attempts have looked into mitigating this data scarcity problem by casting their original single-task problem into a new multi-task learning (MTL) problem. The main goal of this inductive transfer mechanism is to leverage domain-specific information from related tasks, in order to improve generalization on the main task. While recent results in the deep learning (DL) community have shown the promising potential of training task-specific CNNs in a soft parameter sharing framework, integrating the recent DL advances for improving knowledge sharing is still an open problem. In this paper, we propose the Deep Collaboration Network (DCNet), a novel approach for connecting task-specific CNNs in an MTL framework. We define connectivity in terms of two distinct non-linear transformation blocks. One aggregates task-specific features into global features, while the other merges back the global features with each task-specific network. Based on the observation that task relevance depends on depth, our transformation blocks use skip connections as suggested by residual network approaches, to more easily deactivate unrelated task-dependent features. To validate our approach, we employed facial landmark detection (FLD) datasets as they are readily amenable to MTL, given the number of tasks they include. Experimental results show that we can achieve up to 24.31% relative improvement in landmark failure rate over other state-of-the-art MTL approaches. We finally perform an ablation study showing that our approach effectively allows knowledge sharing, by leveraging domain-specific features at particular depths from tasks that we know are related. Over the past few years, convolutional neural networks (CNNs) have become the leading approach in many vision-related tasks BID12. By creating a hierarchy of increasingly abstract concepts, they can transform complex high-dimensional input images into simple low-dimensional output features. Although CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, successfully training them requires large amounts of data. Optimizing deep networks is tricky, not only because of problems like vanishing / exploding gradients BID8 or internal covariate shift BID9, but also because they typically have many parameters to be learned (which can go up to 137 billion BID21). While previous works have looked at networks pre-trained on a large image-based dataset as a starting point for their gradient descent optimization, others have considered improving generalization by casting their original single-task problem into a new multi-task learning (MTL) problem (see BID31 for a review). As BID2 explained in his seminal work: "MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks". Exploring new ways to efficiently gather more information from related tasks - the core contribution of our approach - can thus help a network to further improve upon its main task. The use of MTL goes back several years, but has recently proven its value in several domains. As a consequence, it has become a dominant field of machine learning BID30.
Although many early and influential works contributed to this field BID5 ), recent major advances in neural networks opened up opportunities for novel contributions in MTL. Works on grasping BID17, pedestrian detection BID24, natural language processing BID14, face recognition BID26 BID27 and object detection BID16 have all shown that MTL has been finally adopted by the deep learning (DL) community as a way to mitigate the lack of data, and is thus growing in popularity. MTL strategies can be divided into two major categories: hard and soft parameter sharing. Hard parameter sharing is the earliest and most common strategy for performing MTL, which dates back to the original work of BID2. Approaches in this category generally share the hidden layers between all tasks, while keeping separate outputs. Recent in the DL community have shown that a central CNN with separate task-specific fully connected (FC) layers can successfully leverage domain-specific information BID18 BID17 BID27. Although hard parameter sharing reduces the risk of over-fitting BID1, shared layers are prone to be overwhelmed by features or contaminated by noise coming from particular noxious related tasks.Soft parameter sharing has been proposed as an alternative to alleviate this drawback, and has been growing in popularity as a potential successor. Approaches in this category separate all hidden layers into task-specific models, while providing a knowledge sharing mechanism. Each model can then learn task-specific features without interfering with others, while still sharing their knowledge. Recent works using one network per task have looked at regularizing the distance between taskspecific parameters with a 2 norm BID4 or a trace norm BID25, training shared and private LSTM submodules, partitioning the hidden layers into subspaces BID19 and regularizing the FC layers with tensor normal priors BID15. In the domain of continual learning, progressive network BID20 has also shown promising for cross-domain sequential transfer learning, by employing lateral connections to previously learned networks. Although all these soft parameter approaches have shown promising potential, improving the knowledge sharing mechanism is still an open problem. In this paper, we thus present the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a soft parameter sharing MTL framework. We contribute with a novel knowledge sharing mechanism, dubbed the collaborative block, which implements connectivity in terms of two distinct non-linear transformations. One aggregates task-specific features into global features, and the other merges back the global features into each task-specific network. We demonstrate that our collaborative block can be dropped in any existing architectures as a whole, and can easily enable MTL for any approaches. We evaluated our method on the problem of facial landmark detection in a MTL framework and obtained better in comparison to other approaches of the literature. We further assess the objectivity of our training framework by randomly varying the contribution of each related tasks, and finally give insights on how our collaborative block enables knowledge sharing with an ablation study on our DCNet. The content of our paper is organized as follows. We first describe in Section 2 works on MTL closely related to our approach. We also describe Facial landmark detection, our targeted application. 
Architectural details of our proposed Multi-Task approach and its motivation are spelled out in Section 3. We then present in Section 4 a number of comparative on this Facial landmark detection problem for two CNN architectures, AlexNet and ResNet18, that have been adapted with various MTL frameworks including ours. It also contains discussions on an ablation study showing at which depth feature maps from other tasks are borrowed to improve the main task. We conclude our paper in Section 5.2 RELATED WORK 2.1 MULTI-TASK LEARNING Our proposed deep collaboration network (DCNet) is related to other existing approaches. The first one is the cross-stitch (CS) BID16 ) network, which connects task-specific networks through linear combinations of the spatial feature maps at specific layers. One drawback of CS is that they are limited to capturing linear dependencies only, something we address in our proposed approach by employing non-linearities when sharing feature maps. Indeed, non-linear combinations are usually able to learn richer relationships, as demonstrated in deep networks. Another related approach is tasks-constrained deep convolutional network (TCDCN) for facial landmarks detection. In it, the authors proposed an early-stopping criterion for removing auxiliary tasks before the network starts to over-fit to the detriment of the main task. One drawback of their approach is that their criterion has several hyper-parameters, which must all be selected manually. For instance, they define an hyper-parameter controlling the period length of the local window and a threshold that stops the task when the criterion exceeds it, all of which can be specified for each task independently. Unlike TCDCN, our approach has no hyper-parameters that depend on the tasks at hand, which greatly simplifies the training process. Our two transformation blocks consist of a series of batch normalization, ReLU, and convolutional layers shaped in a standard setting based on recent advances in residual network (see Sec. 3). This is particularly useful for computationally expensive deep networks, since integrating our proposed approach requires no additional hyper-parameter tuning experiments. Our proposed approach is also related to HyperFace BID18. In this work, the authors proposed to fuse the intermediate layers of AlexNet and exploit the hierarchical nature of the features. Their goal was to allow low-level features containing better localization properties to help tasks such as landmark localization and pose detection, and allow more class-specific high-level features to help tasks like face detection and gender recognition. Although HyperFace uses a single shared CNN instead of task-specific CNNs and is not entirely related to our approach, the idea of feature fusion is also central in our work. Instead of fusing the features at intermediate layers of a single CNN, our approach aggregates same-level features of multiple CNNs, at different depth independently. Also, one drawback of HyperFace is that the proposed feature fusion is specific to AlexNet, while our method is not specific to any network. In fact, our approach takes into account the vast diversity of existing network architectures, since it can be added to any architecture without modification. Facial landmark detection (FLD) is an essential component in many face-related tasks BID22 BID10 BID0. 
This problem can be described as follows: given the image of an individual's face, the goal is to predict the (x, y)-position on the image of specific landmarks associated with key features of the visage. Applications such as face recognition BID3, face validation BID23, facial feature detection and tacking rely on the ability to correctly find the location of these distinct facial landmarks in order to succeed. Localizing facial key points like the center of the eyes, the corners of the mouth, the tip of the nose and the earlobes is however a challenging problem when many lighting conditions, head poses, facial expressions and occlusions increase diversity of the face images. In addition to integrating this variability into the estimation process, a FLD model must also take into account a number of correlated factors. For instance, although both an angry person and a sad person have frowned eyebrows, an angry person will have pinched lips while a sad person will have sunken mouth corners BID6. A particularity of datasets geared towards FLD is that, on top of containing the position of these various facial landmarks, they also contain a number of other labels that can be seen as tasks on their own, such as gender recognition, smile recognition, glasses recognition or face orientation. For this reason, FLD datasets are particularly well-suited to evaluate MTL frameworks. Given T task-specific convolutional neural networks (CNNs), our goal is to share domain-specific information by connecting task-specific networks together using their respective feature maps. Unlike CS, our proposed approach will define this feature sharing in terms of two distinct non-linear transformations. Linear transformations like those used in CS can limit feature sharing ability, unlike ours that can learn complex transformations and properly connect each network. For the sake of simplicity, we suppose that the networks have the same structure, which we refer to as the underlying network. Note that our approach also works with different architectures. Let us Figure 1: Example of our collaborative block applied on the feature maps of five task-specific networks. The input feature maps (shown in part 1) are first concatenated depth-wise and transformed into a global feature map (part 2). The global feature map is then concatenate with each input feature map individually and transformed into task-specific feature maps (part 3). Each ing feature map is then added back to the input feature map using a skip connection (part 4), which gives the final outputs of the block (part 5).further decompose the underlying network as a series of blocks, where each block can be as small as a single layer, as large as the whole network itself, or based on simple rules such as grouping all layers with matching spatial dimensions or grouping every n subsequent layers. The arrangement of the layers into blocks does not change the nature of the network, but instead facilitates the understanding of applying our method. In particular, it makes explicit the depth at which we connect the feature maps via our framework. Since our approach is independent on depth, we will drop the depth index on the feature maps to further simplify the equations. As such, we will define the feature map output of a block at a certain depth as x t, where t ∈ {1 . . . T}, for each task t. 
Our approach takes as input all x_t task-specific feature maps and processes them into new feature maps y_t as follows: y_t = x_t + F_t([z, x_t]), with z = H([x_1, . . ., x_T]) (1), where H and F_t represent the central and the task-specific aggregations respectively, and [·] denotes depth-wise concatenation. We refer to (1) as our collaborative block. The goal of H is to combine all task-specific feature maps x_t into a global feature map z representing unified knowledge, while the goal of F_t is to merge the global feature map z back into each input feature map x_t individually. As shown in Fig. 1, H and F_t have the following structure: H(x) = Conv_(3×3)(δ(BN(Conv_(1×1)(δ(BN(x)))))) (2) and F_t(x) = Conv_(3×3)(δ(BN(Conv_(1×1)(δ(BN(x)))))) (3), where BN stands for batch normalization, δ for the ReLU activation function and Conv_(h×w) for a standard convolutional layer with filters of size (h × w). The first Conv_(1×1) layer in H divides the number of feature maps by a factor of 4, while the first Conv_(1×1) layer in F_t divides it to match the size of x_t. As seen by the presence of a skip connection in (1), recent advances in residual networks inspired the structure of our collaborative block. Based on the argument by BID7 that a network may more easily learn the proper underlying mapping by using an identity skip connection, we also argue that it may help our task-specific networks to more easily integrate the domain information of each related task. One advantage of identity skip connections is that learning an identity mapping can be done by simply pushing all parameters of the residual mapping towards zero. We integrated this observation into our collaborative block by allowing each task-specific network to easily remove the contribution of the global feature map z through an identity skip connection, in order to account for cases where it does not help. As we see in (1), pushing all parameters of F_t towards zero removes its contribution, and the output of the block then simply reverts to the original input x_t. Our motivation for using an identity skip connection around the global feature map z comes from the fact that the depth at which we insert our collaborative block influences the relevance of each task towards another. Considering that a network learns a hierarchy of increasingly abstract features, some task-specific networks may benefit more from sharing their low-level features than from sharing their high-level features. For instance, tasks such as landmark localization and pose detection may profit from low-level features containing better localization properties, while tasks such as face detection and gender recognition may take advantage of more class-specific high-level features. Our collaborative block can take task relevance into account by deactivating a different set of residual mappings F_t depending on the level at which it appears in the network. An example of such specialization is shown in our ablative study in Section 4.5. FIG0 presents an example of our Deep Collaboration Network (DCNet) using ResNet18 as the underlying network. As we can see in the top part of the figure, this comes down to interleaving the underlying network block structure with our collaborative block. Each collaborative block receives as input the output of each task-specific block, processes it as detailed in Fig. 1, and sends the results back to each task-specific network. Adding our approach to any underlying network can be done by simply following the same pattern of interleaving the network block structure with our collaborative block.
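For concreteness, the following is a minimal PyTorch sketch of the collaborative block in (1)-(3); the module name, the channel bookkeeping and the assumption that H outputs the same number of channels as each x_t are ours, not taken from the authors' released code.

```python
# Minimal PyTorch sketch of Eq. (1)-(3); names and channel choices are assumptions.
import torch
import torch.nn as nn

def _bn_relu_conv(in_ch, mid_ch, out_ch):
    # BN -> ReLU -> Conv1x1 -> BN -> ReLU -> Conv3x3, the structure used for H and F_t.
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, mid_ch, kernel_size=1),
        nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1),
    )

class CollaborativeBlock(nn.Module):
    def __init__(self, num_tasks, channels):
        super().__init__()
        total = num_tasks * channels
        # Central aggregation H: first 1x1 conv divides the channels by 4; we assume
        # the global feature map z has `channels` channels.
        self.central = _bn_relu_conv(total, total // 4, channels)
        # Task-specific aggregations F_t: first 1x1 conv matches the size of x_t.
        self.task_specific = nn.ModuleList(
            [_bn_relu_conv(2 * channels, channels, channels) for _ in range(num_tasks)]
        )

    def forward(self, xs):
        z = self.central(torch.cat(xs, dim=1))  # Eq. (2): unified knowledge
        # Eq. (1): identity skip connection; a task can ignore z by zeroing F_t.
        return [x + f(torch.cat([z, x], dim=1)) for x, f in zip(xs, self.task_specific)]

# Example: five task-specific feature maps of shape (batch, 64, 28, 28).
xs = [torch.randn(2, 64, 28, 28) for _ in range(5)]
ys = CollaborativeBlock(num_tasks=5, channels=64)(xs)
```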
In each instance, the left column (blue) is for un-pretrained networks, while the right column (green) is for pre-trained networks. Our proposed approach obtains the lowest failure rates overall. In this section, we detail our multi-task learning (MTL) training framework and present our experiments in facial landmark detection (FLD) tasks. We also analyze our approach by performing an ablation study and by experimenting with task importance. As described previously, the problem of facial landmark detection is to predict the (x, y)-position on the image of specific landmarks associated with key features of the visage. While the number and type of landmarks are specific to each dataset, examples of standard landmarks to be predicted are the corners of the mouth, the tip of the nose and the center of the eyes. In addition to the facial landmarks, each dataset further defines a number of related tasks. These related tasks also vary from one dataset to another, and are typically gender recognition, smile recognition, glasses recognition or face orientation. One particularity of our optimization framework is that we treat each task as a classification problem. While this is straightforward for gender, smile and glasses recognition as they are already classification tasks, it is a bit more tricky for face orientation and FLD. For face orientation, instead of predicting the roll, yaw and pitch real value as in a regression problem, we divide each component into 30 degrees wide bins and predict the label of the bin corresponding to the value. Similarly for FLD, rather than predicting the real (x, y)-position of each landmark, we divide the image into 1 pixel wide bins and predict the label of the bin corresponding to the value. Note that we still use the original real values when comparing our prediction with the ground truth, such that we incorporate our approximation errors in the final score. By doing this, do not artificially boost our performance. We report our using the landmark failure rate metric, which is defined as follows: we first compute the mean distance between the predicted landmarks and the ground truth landmarks, then normalize it by the inter-ocular distance from the center of the eyes. A normalized mean distance greater than 10% is reported as a failure. As a first experiment, we performed facial landmark detection on the Multi-Task Facial Landmark (MTFL) dataset. The dataset contains 12,995 face images annotated with five facial landmarks and four related attributes of gender, smiling, wearing glasses and face profile (five profiles in total). The training set has 10,000 images, while the test set has 2,995 images. We perform four sets of experiments using an ImageNet pre-trained AlexNet, an ImageNet pre-trained ResNet18, an un-pretrained AlexNet and an un-pretrained ResNet18 as underlying networks. For AlexNet, we apply our collaborative block after each max pooling layer, while we do as shown in FIG0 for ResNet18. Figure 4: Example predictions of our DCNet with pre-trained ResNet18 as underlying network on the MTFL task. The first two contains failure cases, while last two contains successes. Elements in green correspond to ground truth, while those in blue correspond to our prediction. In addition to providing the facial landmarks (the small dots), we also include the labels of the related tasks: gender, smiling, wearing glasses and face profile. As shown in the first example, over-exposition can have a large impact on the prediction quality. 
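The failure-rate metric described above can be computed as follows; this helper (function name, array layout, eye indices) is our own sketch rather than the authors' evaluation script.

```python
# Landmark failure rate: mean point-to-point error, normalised by the inter-ocular
# distance; an image counts as a failure when this ratio exceeds 10%.
import numpy as np

def landmark_failure_rate(pred, gt, left_eye=0, right_eye=1, threshold=0.10):
    # pred, gt: (num_images, num_landmarks, 2) arrays of (x, y) positions.
    mean_dist = np.linalg.norm(pred - gt, axis=2).mean(axis=1)
    inter_ocular = np.linalg.norm(gt[:, left_eye] - gt[:, right_eye], axis=1)
    return float(((mean_dist / inter_ocular) > threshold).mean())
```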
We compare our approach to several other approaches of the literature. We include single-task learning (AN-S when using AlexNet as underlying network, RN-S when using ResNet18), MTL with a single central network (AN and RN), MTL with a single central network that is widen to match the number of parameters of our approach (ANx and RNx), the HyperFace network (HF), the Tasks-Constrained Deep Convolutional Network (TCDCN) and the Cross-Stitch approach (CS). We train each network three times for 300 epochs and report landmark failure rates averaged over the last five epochs, further averaged over the three tries. One we can observe from FIG1 is that our proposed approach obtained the lowest failure rates in each case. Indeed, our DCNet with un-pretrained and pre-trained AlexNet as underlying network obtained 19.99% and 19.93% failure rates respectively, while we obtained 15.32% and 14.34% with ResNet18. This is significantly lower than the other approaches to which we compare ourselves. For instance, with AlexNet, HF had 27.75% and 27.32%, XS had 26.41% and 25.65%, and TCDCN had 25.00% 1, while with ResNet18, XS had 18.43% and 15.52% respectively. We obtained the highest improvements when using AlexNet as the underlying network. With un-pretrained and pre-trained AlexNet, we obtained improvements of 6.42% and 5.07%, while we obtained 1.43% and 1.18% with ResNet18. Performing MTL with our approach can thus improve performance over using other approaches of the literature. FIG1 is that although increasing the number of parameters of multi-task AlexNet (AN) and ResNet18 (RN) can significantly improve performance, connecting task-specific networks with our approach is more efficient. For instance, while AlexNet (ANx) and ResNet18 (RNx) with widened layers that match the number of parameters of our approach lowers the failure rates from 20.05% (RN) and 28.02% (AN) to 16.75% (RNx) and 26.88% (ANx) respectively, our approach with AlexNet and ResNet18 as underlying networks further reduces the failure rates to 15.32% and 19.99%. These show that while simply increasing the number of parameters is an effortless avenue for improving performance, developing novel architectures enhancing network connectivity may open more rewarding research directions for further leveraging the domaininformation of related tasks. FORMULA0's approach and XS for Cross-Stitch. In each instance, the left column (blue) is for un-pretrained networks, while the right column (green) is for pre-trained networks. Our proposed approach (last column) obtains the lowest failure rates overall. As a second experiment, we performed domain adaptation on the Annotated Face in the Wild (AFW) dataset BID33. The dataset has 205 Flickr images, where each image can contain more than one face. Instead of using the images as provided, we process them using the available facial bounding boxes. We extract all faces with visible landmarks, which gives a total of 377 face images. We then take each network of Section 4.2 trained on the MTFL dataset and evaluate them without fine-tuning on these images. FIG4 presents the of this experiment. As it was the case in Section 4.2, our approach obtained the best overall. Indeed, our DCNet with un-pretrained and pre-trained AlexNet as underlying network obtained 43.77% and 45.62% failure rates respectively, while it obtained 37.75% and 37.84% with ResNet18. This is significantly lower then the other approaches. 
For instance, with AlexNet, HF had 48.54% and 51.81%, and XS had 48.98% and 49.43%, while with ResNet18, XS had 43.24% and 40.58% respectively. Unlike what we observed in Section 4.2, our approach obtained the highest improvement with AlexNet when using an un-pretrained underlying network, while it obtained the highest improvement with ResNet18 when using a pre-trained underlying network. Indeed, DCNet with un-pretrained and pretrained AlexNet obtained 4.24% and 2.39% improvements, while it obtained 3.10% and 2.74% with ResNet18 respectively. An interesting we can observe from FIG4 is that approaches with pre-trained AlexNet did not perform as well as those with pre-trained ResNet18, but rather increased the failure rates in comparison to the ones with un-pretrained AlexNet. For instance, single-task un-pretrained AlexNet obtained 48.72%, while single-task pre-trained AlexNet obtained a higher 50.84%. We also see a similar failure rate degradation for HF, XS and our approach. This is not the case when using ResNet18. Indeed, single-task un-pretrained ResNet18 obtained 44.92%, while single-task pretrained ResNet18 obtained a lower 41.20%. Although the dataset is small and more experiments would help to better understand why this is happening, these suggests that ResNet18 is more capable of adapting its pre-trained features for domain adaptation. As third experiment, we evaluate the influence of the number of training examples on MTL, using the Annotated Facial Landmarks in the Wild (AFLW) dataset BID11. The dataset has 21,123 Flickr images, where each image can contain more than one face. Instead of using the images as provided, we process them using the available facial bounding boxes. We extract all faces with visible landmarks, which gives a total of 2,111 images. This dataset defines 21 facial landmarks and has 3 related tasks (gender, wearing glasses and face orientation). For face orientation, we divide the roll, yaw and pitch into 30 degrees wide bins (14 bins in total), and predict the label corresponding to each real value. Figure 6: Landmark failure rate progression (in %) on the AFLW dataset with varying train / test ratio using a pre-trained ResNet18 as base network. Each curve is the average over three tries. Even though our approach has the slowest convergence rate, it outperforms the others in four of the five cases. Our experiment works as follows. With a pre-trained ResNet18 as underlying network, we compare our approach to single-task ResNet18 (RN-S), multi-task ResNet18 (RN) and Cross-Stitch network (XS) by training on a varying number of images. We use five different train / test ratios, starting with 0.1 / 0.9 up to 0.9 / 0.1 by 0.2 increment. In other words, we train each approaches on the first 10% of the available images and test on the other 90%, then repeat for all the other train / test ratios. We use the same training framework as in section 4.2. We train each network three times for 300 epochs, and report the landmark failure rate averaged over the last five epochs, further averaged over the three tries. As we can see in TAB2, our approach obtained the best performance in all cases except the first one. Indeed, we observe between 1.46% and 3.05% improvements with train / test ratios from 0.3 / 0.7 to 0.9 / 0.1, while we obtain a negative relative change of 4.90% with train / test ratio of 0.1 / 0.9. 
In fact, since all multi-task approaches obtained higher failure rates than the single-task approach, this suggests that the networks are over-fitting the small training set. Nonetheless, these show that we can obtain better performance using our approach. Figure 6 presents the landmark failure rate progression on all train / test ratios for each network. One interesting we can see from these figures is that our approach has the slowest convergence rate during the first stage of the training process. For instance, with a train / test ratio of 0.9 / 0.1, our Figure 7: Landmark failure rate improvement (in %) of our approach compared to XS when sampling random task weights. We used a pre-trained ResNet18 as underlying network. The histogram at the left and the plot at the top right represents performance improvement achieved by our proposed approach (positive value means lower failure rates), while the plot at the bottom right corresponds to the log of the task weights. Our approach outperformed XS in 86 out of the 100 tries, thus empirically demonstrating that our learning framework was not unfavorable towards XS and that our approach is less sensitive to the task weights λ.approach converges at about epoch 100, while the others start converging at about epoch 50. In fact, while the other methods have similar convergence rate, the epoch at which our approach converges increases as the number of training images decrease. Indeed, our approach converges at about epoch 130 with a train / test ratio of 0.5 / 0.5, while it converges at about epoch 160 with a train / test ratio of 0.1 / 0.9. Even though the convergence rate is slower, our approach still converges to a similar train failure rate. This gives a smaller train-to-test gap, which indicates that our approach has better generalization abilities. One particularity that we observe in TAB2 is that the XS network has relatively high failure rates. In the previous experiments of sections 4.2 and 4.3, XS had either similar or better performance than the other approaches (except ours). This could be due to our current multi-task learning framework that is unfavorable towards XS, which may prevent it from leveraging the domain-information of the related tasks. In order to investigate whether this is the case, we perform the following additional experiment. Using a pre-trained ResNet18 as underlying network, we compare our approach to XS by training each network 100 times using task weights randomly sampled from a log-uniform distribution. Specifically, we first sample from a uniform distribution γ ∼ U(log(1e−4), log), then use λ = exp(γ) as the weight. We trained both XS and our approach for 300 epochs with the same task weights using a train / test ratio of 0.5 / 0.5. Figure 7 presents the of this experiment. The plot at the top right of the figure represents the landmark failure rate improvement (in %) of our approach compared to XS, while the plot at the bottom right corresponds to the log of the task weights for each try. In 86 out of the 100 tries, our approach had a positive failure rate improvement, that is, obtained lower failure rates than XS. Moreover, as we can see in the histogram at the left of Fig. 7, in addition being normally distributed around a mean of 2.78%, our approach has a median failure rate improvement of 3.14% and a maximum improvement of 8.45%. These show that even though we sampled at random different weights for the related tasks, our approach outperforms XS in the majority of the cases. 
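A small sketch of the log-uniform task-weight sampling used in this experiment; the upper bound of the uniform distribution is cut off in the text, so the value 1.0 below is only an illustrative assumption.

```python
# gamma ~ U(log(1e-4), log(high)), lambda = exp(gamma)  (log-uniform task weights).
import numpy as np

def sample_task_weights(num_tasks, low=1e-4, high=1.0, seed=0):
    rng = np.random.default_rng(seed)
    gamma = rng.uniform(np.log(low), np.log(high), size=num_tasks)
    return np.exp(gamma)

weights = sample_task_weights(num_tasks=4)  # one weight per related task
```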
Our learning framework is therefore not unfavorable toward XS. As a final experiment, we perform an ablation study on our approach using the MTFL dataset with an un-pretrained ResNet18 as base network. The goal of this experiment is to test whether the observed performance improvements is due to the increased ability of each task-specific network to share their respective features, rather than only due to the intrinsic added representative ability of using additional non-linear layers. Our experiment works as follows: we evaluate the impact of removing the contribution of each task-specific network by masking their respective feature maps at the input of the central aggregation transformation. By referring to Fig. 1, this corresponds to Figure 8: Results of our ablation study on the MTFL dataset with an un-pretrained ResNet18 as underlying network. We remove each task-specific features from each respective central aggregation layer and evaluate the effect on landmark failure rate. The columns represent the task-specific network, while the rows correspond to the network block structure. Blocks with a high saturated color were found to have a large impact on performance. For instance, this ablative study shows that the influence of high-level face profile features is large within our proposed architecture, which corroborates with the well-known fact that facial landmark locations are highly correlated to profile orientation. This thus constitutes an empirical evidence of domain-specific information sharing via our approach.zeroing out the designated feature maps during concatenation in part 2 of the collaborative block (at the bottom of the figure). Note that the collaborative ResNet18 is trained on the MTFL dataset using the same framework as explained in Sec. 4.1, and the ablation study is performed on the test set. Figure 8 presents the of our ablation study. The columns represent each task-specific network, while the rows correspond to the network block structure. The blocks are ordered from bottom (input) to top (prediction), while the task-specific networks are ordered from left (main task) to right (related tasks). The color saturation indicates the influence of removing the task-specific feature maps from the corresponding central aggregation. A high saturation reflects high influence, while a low saturation reflects low influence. A first interesting that we can see from Fig. 8 is that removing features from the facial landmark detection network significantly increases landmark failure rate. For instance, we observe a negative (worse) relative change of 29.72% and 47.00% in failure rate by removing features from B3 and B2 respectively. This is interesting, as it shows that the main-task network contributes to and feed from the global features computed by the central aggregation transformation. Note that due to using a skip connection between the input and the global features, the network can remove the contribution of the global features by simply zeroing out its task-specific aggregation weights. These show that the opposite is instead happening, where the task-specific features from the facial landmark network influence the quality of the computed global features, which in turn influence the quality of the subsequent task-specific features. Another interesting is that B5 from the face profile-related task has the highest influence on failure rate. Indeed, we observe a negative relative change of 83.87% by removing the corresponding features maps from the central aggregation. 
Knowing that face orientation has a direct impact on the location of the facial landmarks, it makes sense that features from the head related task would be useful for improving landmark predictions. What is interesting in this case is that B5 has the highest-level features of all blocks, due to being at the top of the hierarchy of increasingly abstract features. Since we expect the highest-level features of the head network to resemble a standard rotation matrix, it is evident that the landmark network would use these rich features to better rotate the predicted facial landmarks. This is what we observe in Fig. 8. These constitute an empirical evidence that our approach not only allows leveraging domain-specific information from related tasks, but also improves knowledge sharing with better network connectivity. In this paper, we proposed the deep collaboration network (DCNet), a novel approach for connecting task-specific networks in a multi-task learning setting. It implements feature connectivity and sharing through two distinct non-linear transformations inside a collaborative block, which also incorporates skip connection and residual mapping that are known for their good training behavior. The first transformation aggregates the task-specific feature maps into a global feature map representing unified knowledge, and the second one merges it back into each task-specific network. One key characteristic of our collaborative blocks is that they can be dropped in virtually any existing architectures, making them universal adapters to endow deep networks with multi-task learning capabilities. Our on the MTFL, AFW and AFLW datasets showed that our DCNet outperformed several state-of-the-art approaches, including cross-stitch networks. Our additional ablation study, using ResNet18 as underlying network, confirmed our intuition that the task-specific networks exploited the added flexibility provided by our approach. Additionally, these task-specific networks successfully incorporated features having varying levels of abstraction. Evaluating our proposed approach on other MTL problems could be an interesting avenue for future works. For instance, the recurrent networks used to solve natural language processing problems could benefit from incorporating our novel method leveraging domain-information of related tasks. | We propose a novel approach for connecting task-specific networks in a multi-task learning setting based on recent residual network advances. | 668 | scitldr |
Zero-Shot Learning (ZSL) is a classification task where some classes referred as unseen classes have no labeled training images. Instead, we only have side information (or description) about seen and unseen classes, often in the form of semantic or descriptive attributes. Lack of training images from a set of classes restricts the use of standard classification techniques and losses, including the popular cross-entropy loss. The key step in tackling ZSL problem is bridging visual to semantic space via learning a nonlinear embedding. A well established approach is to obtain the semantic representation of the visual information and perform classification in the semantic space. In this paper, we propose a novel architecture of casting ZSL as a fully connected neural-network with cross-entropy loss to embed visual space to semantic space. During training in order to introduce unseen visual information to the network, we utilize soft-labeling based on semantic similarities between seen and unseen classes. To the best of our knowledge, such similarity based soft-labeling is not explored for cross-modal transfer and ZSL. We evaluate the proposed model on five benchmark datasets for zero-shot learning, AwA1, AwA2, aPY, SUN and CUB datasets, and show that, despite the simplicity, our approach achieves the state-of-the-art performance in Generalized-ZSL setting on all of these datasets and outperforms the state-of-the-art for some datasets. Supervised classifiers, specifically Deep Neural Networks, need a large number of labeled samples to perform well. Deep learning frameworks are known to have limitations in fine-grained classification regime and detecting object categories with no labeled data; ). On the contrary, humans can recognize new classes using their previous knowledge. This power is due to the ability of humans to transfer their prior knowledge to recognize new objects . Zero-shot learning aims to achieve this human-like capability for learning algorithms, which naturally reduces the burden of labeling. In zero-shot learning problem, there are no training samples available for a set of classes, referred to as unseen classes. Instead, semantic information (in the form of visual attributes or textual features) is available for unseen classes (; 2014). Besides, we have standard supervised training data for a different set of classes, referred to as seen classes along with the semantic information of seen classes. The key to solving zero-shot learning problem is to leverage trained classifier on seen classes to predict unseen classes by transferring knowledge analogous to humans. Early variants of ZSL assume that during inference, samples are only from unseen classes. Recent observations; realize that such an assumption is not realistic. Generalized ZSL (GZSL) addresses this concern and considers a more practical variant. In GZSL there is no restriction on seen and unseen classes during inference. We are required to discriminate between all the classes. Clearly, GZSL is more challenging because the trained classifier is generally biased toward seen classes. In order to create a bridge between visual space and semantic attribute space, some methods utilize embedding techniques (; ; ; ; ; ; ; ; ; ; ;) and the others use semantic similarity between seen and unseen classes . Semantic similarity based models represent each unseen class as a mixture of seen classes. 
While the embedding based models follow three various directions; mapping visual space to semantic space (; ; ; ; ;), mapping semantic space to the visual space (; ;), and finding a latent space then mapping both visual and semantic space into the joint embedding space;;; ). The loss functions in embedding based models have training samples only from the seen classes. For unseen classes, we do not have any samples. It is not difficult to see that this lack of training samples biases the learning process towards seen classes only. One of the recently proposed techniques to address this issue is augmenting the loss function with some unsupervised regularization such as entropy minimization over the unseen classes. Another recent methodology which follows a different perspective is deploying Generative Adversarial Network (GAN) to generate synthetic samples for unseen classes by utilizing their attribute information; ). Although generative models boost the significantly, it is difficult to train these models. Furthermore, the training requires generation of large number of samples followed by training on a much larger augmented data which hurts their scalability. The two most recent state-of-the-art GZSL methods, CRnet and COSMO , both employ a complex mixture of experts approach. CRnet is based on k-means clustering with an expert module on each cluster (seen class) to map semantic space to visual space. The output of experts (cooperation modules) are integrated and finally sent to a complex loss (relation module) to make a decision. CRnet is a multi-module (multi-network) method that needs end-to-end training with many hyperparameters. Also COSMO is a complex gating model with three modules: a seen/unseen classifier and two expert classifiers over seen and unseen classes. Both of these methods have many modules, and hence, several hyperparameters; architectural, and learning decisions. A complex pipeline is susceptible to errors, for example, CRnet uses k-means clustering for training and determining the number of experts and a weak clustering will lead to bad . Our Contribution: We propose a simple fully connected neural network architecture with unified (both seen and unseen classes together) cross-entropy loss along with soft-labeling. Soft-labeling is the key novelty of our approach which enables the training data from the seen classes to also train the unseen class. We directly use attribute similarity information between the correct seen class and the unseen classes to create a soft unseen label for each training data. As a of soft labeling, training instances for seen classes also serve as soft training instance for the unseen class without increasing the training corpus. This soft labeling leads to implicit supervision for the unseen classes that eliminates the need for any unsupervised regularization such as entropy loss in. Soft-labeling along with crossentropy loss enables a simple MLP network to tackle GZSL problem. Our proposed model, which we call Soft-labeled ZSL (SZSL), is simple (unlike GANs) and efficient (unlike visual-semantic pairwise embedding models) approach which achieves the state-of-the-art performance in Generalized-ZSL setting on all five ZSL benchmark datasets and outperforms the state-of-the-art for some of them. In zero-shot learning problem, a set of training data on seen classes and a set of semantic information (attributes) on both seen and unseen classes are given. 
The training dataset includes n samples where x i is the visual feature vector of the i-th image and y i is the class label. All samples in D belong to seen classes S and during training there is no sample available from unseen classes U. The total number of classes is C = |S| + |U|. Semantic information or attributes a k ∈ R a, are given for all C classes and the collection of all attributes are represented by attribute matrix A ∈ R a×C. In the inference phase, our objective is to predict the correct classes (either seen or unseen) of the test dataset D. The classic ZSL setting assumes that all test samples in D belong to unseen classes U and tries to classify test samples only to unseen classes U. While in a more realistic setting i.e. GZSL, there is no such an assumption and we aim at classifying samples in D to either seen or unseen classes S ∪ U. 3 PROPOSED METHODOLOGY 3.1 NETWORK ARCHITECTURE As Figure 1 illustrates our architecture, We map visual space to semantic space, then compute the similarity score (dot-product) between true attributes and the attribute/semantic representation of the input (x). Finally, the similarity score is fed into a Softmax, and the probability of all classes are computed. For the visual features as the input, in all five benchmark datasets, we use the extracted visual features by a pre-trained ResNet-101 on ImageNet provided by. We do not fine-tune the CNN that generates the visual features unlike model in. In this sense, our proposed model is also fast and straightforward to train. In ZSL problem, we do not have any training instance from unseen classes, so the output nodes corresponding to unseen classes are always inactive during learning. Standard supervised training with cross entropy loss biases the network towards seen classes only. The true labels (hard labels) used for training only represent seen classes so the cross entropy cannot penalize unseen classes. Moreover, the available similarity information between the seen and unseen attributed is never utilized. We propose soft labeling based on the similarity between semantic attributes. For each seen sample, we represent its relationship to unseen categories by obtaining semantic similarity (dot-product) using the seen class attribute and all the unseen class attributes. In the simplest form, for every training data, we can find the nearest unseen class to the correct seen class label and assign a small probability q (partial membership or soft label) of this instance to be from the closest unseen class. Note, each training sample only contains a label which comes from the set of seen classes. With soft labeling, we enrich the label with partial assignments to unseen classes and as shows, soft labels act as a regularizer which allows each training case to enforce much more constraint on weights. In a more general soft labeling approach, we propose assigning a probability to all the unseen classes. A natural choice is to transform seen-to-unseen similarities to probabilities (soft labels) shown in Equation. The unseen distribution is obtained for each seen class by calculating dot-product of seen class attribute and all unseen classes attributes and squashing all these dot-product values by Softmax to acquire probabilities. In this case, we distribute the probability q among all unseen classes based on the obtained unseen distribution. This proposed strategy in a soft label for each seen image during training, which as we show later helps the network to learn unseen categories. 
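As an illustration, a minimal PyTorch sketch of the visual-to-semantic classifier described above follows; the hidden size, the tanh activation and the class/attribute shapes are assumptions for the example, not the exact tuned configuration.

```python
# Sketch of the visual-to-semantic classifier: map 2048-d ResNet-101 features to the
# attribute space, score every class by a dot product with its attribute vector,
# and apply Softmax over all seen and unseen classes.  Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SZSLNet(nn.Module):
    def __init__(self, attribute_matrix, hidden=1024):
        super().__init__()
        self.register_buffer("A", attribute_matrix)  # (attr_dim, num_classes), seen + unseen
        attr_dim = attribute_matrix.shape[0]
        self.g = nn.Sequential(nn.Linear(2048, hidden), nn.Tanh(), nn.Linear(hidden, attr_dim))

    def forward(self, x):
        scores = self.g(x) @ self.A          # similarity scores for all C classes
        return F.softmax(scores, dim=1)      # class probabilities

probs = SZSLNet(torch.randn(85, 50))(torch.randn(8, 2048))  # e.g. 85 attributes, 50 classes
```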
In order to control the flatness of the unseen distribution, we utilize temperature parameter τ. Higher temperature in flatter distribution over unseen categories and lower temperature creates a more ragged distribution with peaks on nearest unseen classes. A small enough temperature basically in the nearest unseen approach. The Impact of temperature τ on unseen distribution is depicted in Figure 3.a for a particular seen class. Soft labeling implicitly introduces unseen visual features into the network without generating fake unseen samples as in generative methods; ). Hence our proposed approach is able to reproduce same effect as in generative models without the need to create fake samples and train generative models that are known to be difficult to train. Below is the formal description of temperature Softmax: where a i is the i-th column of attribute matrix A ∈ R a×C which includes both seen and unseen class attributes: And s i,j is the true similarity score between two classes i, j based on their attributes. τ and q are temperature parameter and total probability assigned to unseen distribution, respectively. Also y u i,k is the soft label (probability) of unseen class k for seen class i. It should be noted that q is the sum of all unseen soft labels i.e. k∈U y u i,k = q. The proposed method is a multi-class probabilistic classifier that produces a C-dimensional vector of class probabilities p for each sample x i as p(C-dimensional vector of all similarity scores of an input sample. The predicted similarity score between semantic representation of sample x i and attribute a k isŝ i,k g w (x i), a k. Each element of vector p, represents an individual class probability that can be shown below: This Softmax as the activation function of the last layer of the network is calculated on all classes. An established choice to train a multi-class probabilistic classifier is the cross-entropy loss which we later show naturally integrates our idea of soft labeling. Inspired by , in addition to the cross-entropy loss over soft targets, we also consider cross entropy-loss over true labels (hard labels) to improve the performance. During training, we aim at learning the nonlinear mapping g w i.e. obtaining network weights W through: where λ and γ are regularization factors which are obtained through hyperparameter tuning, and L(x i) is the weighted sum of cross-entropy loss over soft labels (L soft) and cross-entropy loss over hard labels (L hard) for each sample as shown below: where α ∈ is a hyperparameter. For better understanding, the hard-loss and soft-loss terms for each sample x i (or x for simplicity) are expanded and elaborated. The hard-loss term is a conventional cross-entropy loss L hard (x) = − C k=1 z k log(p k), where z k is the hard label. Clearly, hard-loss term alone does not work in ZSL regime since it does not penalize unseen classes. The soft-loss term is expanded to seen and unseen terms as follows: Utilizing Equation Hence the first two terms of L soft (x) is the weighted sum of cross-entropy of seen classes and crossentropy of unseen classes. In particular, first term penalizes and controls the relative (normalized) probabilities within all seen classes and the second term acts similarly within unseen classes. We also require to penalize the total probability of all seen classes (1 −q) and total probability of all unseen classes (q). This is accomplished through the last two terms of Equation which is basically a binary cross entropy loss. 
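A sketch of the temperature-softmax soft labels and the combined hard/soft cross-entropy described above; it collapses the seen/unseen decomposition of the soft loss into a single soft-target cross-entropy, so it reflects our reading of the text rather than the exact objective.

```python
# Soft labels from attribute similarities (temperature Softmax over unseen classes)
# and a combined hard/soft cross-entropy.  A simplified reading, not the exact loss.
import torch
import torch.nn.functional as F

def unseen_soft_labels(A, seen_ids, unseen_ids, q=0.2, tau=0.2):
    # A: (attr_dim, C); seen_ids, unseen_ids: LongTensors of class indices.
    # Returns (num_seen, num_unseen) soft labels, each row summing to q.
    sims = A[:, seen_ids].t() @ A[:, unseen_ids]      # s_{i,k} = a_i . a_k
    return q * F.softmax(sims / tau, dim=1)

def szsl_loss(probs, labels, soft_unseen, seen_ids, unseen_ids, q=0.2, alpha=0.5):
    # probs: (batch, C) network output; labels: indices into seen_ids.
    targets = torch.zeros_like(probs)
    targets[torch.arange(len(labels)), seen_ids[labels]] = 1.0 - q   # mass on true seen class
    targets[:, unseen_ids] = soft_unseen[labels]                     # q spread over unseen
    soft = -(targets * torch.log(probs + 1e-12)).sum(dim=1).mean()
    hard = F.nll_loss(torch.log(probs + 1e-12), seen_ids[labels])
    return alpha * soft + (1 - alpha) * hard
```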
Intuitively soft-loss in Equation works by controlling the balance within seen/unseen classes (first two terms) as well as the balance between seen and unseen classes (last two terms). As we have shown in Equation, soft-loss enables the classifier to learn unseen classes by only being exposed to samples from seen classes. Hyperparameter q acts as a trade-off coefficient between seen and unseen cross-entropy losses (Figure 2). We can see that the regularizer is a weighted cross entropy on unseen class, which leverages similarity structure between attributes compared to uniform entropy function of DCN. DCN and all prior works use uniform entropy as regularizer, which does not capitalize on the known semantic similarity information between seen and unseen class attributes. At the inference time, our proposed SZSL method works the same as a conventional classifier, we only need to provide the test image and the network will produce class probabilities for all seen and unseen classes. We conduct comprehensive comparison of our proposed SZSL model with the state-of-the-art methods for GZSL settings on five benchmark datasets (Table 1). We present the detailed description of datasets in Appendix A. Our model outperforms the state-of-the-art methods on GZSL setting for all benchmark datasets. For the purpose of validation, we employ the validation splits provided along with the Proposed Split (PS) to perform cross-validation for hyper-parameter tuning. The main objective of GZSL is to simultaneously improve seen samples accuracy and unseen samples accuracy i.e. imposing a trade-off between these two metrics. As the , the standard GZSL evaluation metric is harmonic average of seen and unseen accuracy. This metric is chosen to encourage the network not be biased toward seen classes. Harmonic average of accuracies is defined as where A S and A U are seen and unseen accuracies, respectively. 11.3 74.6 19.6 8.0 73.9 14.4 3.7 55.7 6.9 23.5 59.2 33.6 14.7 30.5 19.8 ConSE 0.4 88.6 0.8 0.5 90.6 1.0 0.0 91.2 0.0 1.6 72.2 3.1 6.8 39.9 11.6 Sync 8.9 87.3 16.2 10.0 90.5 18.0 7.4 66.3 13.3 11.5 70.9 19.8 7.9 43.3 13.4 DeViSE 13.4 68.7 22.4 17.1 74.7 27.8 4.9 76.9 9.2 23.8 53.0 32.8 16.9 27.4 20.9 CMT 0.9 87.6 1.8 8.7 89.0 15.9 1.4 85.2 2.8 7.2 49.8 12.6 8.1 21.8 11.8 Generative Models f-CLSWGAN 57.9 61.4 59.6 ------43.7 57.7 49.7 42.6 36.6 39.4 SP-AEN 23.3 90.9 37.1 ---13.7 63.4 13.7 34.7 70.6 46.6 24.9 38.6 30.3 cycle-UWGAN 59.6 63.4 59. To evaluate SZSL, we follow the popular experimental framework and the Proposed Split (PS) in for splitting classes into seen and unseen classes to compare GZSL/ZSL methods. Utilizing PS ensures that none of the unseen classes have been used in the training of ResNet-101 on ImageNet. The input to the model is the visual features of each image sample extracted by a pre-trained ResNet-101 on ImageNet provided by. The dimension of visual features is 2048. We utilized Keras with TensorFlow back-end to implement our model. We used proposed unseen classes for validation (3-fold CV) and added 20% of train samples (seen classes) as seen validation samples to obtain GZSL validation sets. We crossvalidate τ ∈ [10 −2, 10], mini-batch size ∈ {64, 128, 256, 512, 1024}, q ∈, α ∈, hidden layer size ∈ {128, 256, 512, 1024, 1500} and activation function ∈{tanh, sigmoid, hard-sigmoid, relu} to tune our model. To obtain statistically consistent , the reported accuracies are averaged over 5 trials (using different initialization) after tuning hyper-parameters with cross-validation. 
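The harmonic metric in code; a trivial helper of our own, checked against the first baseline row of the results table.

```python
def harmonic_accuracy(acc_seen, acc_unseen):
    # A_H = 2 * A_S * A_U / (A_S + A_U)
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

print(harmonic_accuracy(0.746, 0.113))  # ~0.196, matching the 11.3 / 74.6 / 19.6 row
```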
Also we ran our experiments on a machine with 56 vCPU cores, Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHZ and 2 NVIDIA-Tesla P100 GPUs each with 16GB memory. The code is provided in the supplementary material. To demonstrate the effectiveness of SZSL model in GZSL setting, we comprehensively compare our proposed method with state-of-the-art GZSL models in Table 2. Since we use the standard proposed split, the published of other GZSL models are directly comparable. As reported in Table 2, accuracies of our model achieves the state-of-the-art GZSL performance on all five benchmark datasets and outperforms the state-of-the-art on AwA2 and aPY datasets. It is exciting and motivating while our architecture is much simpler compared to recently proposed CRnet and COSMO, yet, we achieve similar or better accuracies compared to them. We have only one simple fully connected neural network with 2 trainable layers, compared to CRnet with K mixture of experts followed by relation module with complex loss functions (pairwsie). Soft labeling employed in SZSL gives the model new flexibility to trade-off between seen and unseen accuracies during training and attain a higher value of harmonic accuracy A H, which is the standard metric for GZSL. Assigned unseen soft labels (unseen probability q) enables the classifier to gain more confidence in recognizing unseen classes, which in turn in considerably higher unseen accuracy A U. As the classifier is now discriminating between more classes we get marginally lower seen accuracy A S. However, balancing A S and A U with the cost of deteriorating A S leads to much higher A H. This trade-off phenomenon is depicted in Figure 2 for all datasets. The flexibility provided by soft labeling is examined by obtaining accuracies for different values of q. In Figure 2.a and 2.b, by increasing total unseen probability q, A U increases and A S decreases as expected. From the trade-off curves, there is an optimal q where A H takes its maximum value as shown in Figure 2. Maximizing A H is the primary objective in a GZSL problem that can be achieved by semantic similarity based soft labeling and the trade-off knob, q. It should be noted that both AwA and aPY datasets (Figure 2 .a and 2.b) are coarse-grained class datasets. In contrast, CUB and SUN datasets are fine-grained with hundreds of classes and highly unbalanced seen-unseen split, and hence their accuracies have different behavior concerning q, as shown in Figure 2.c and 2.d. However, harmonic average curve still has the same behavior and possesses a maximum value at an optimal q. We illustrate the intuition with AwA dataset , a ZSL benchmark dataset. Consider a seen class squirrel. We compute closest unseen classes to the class squirrel in terms of attributes. We naturally find that the closest class is rat and the second closest is bat, while other classes such as horse, dolphin, sheep, etc. are not close (Figure 3.a). This is not surprising as squirrel and rat share several attribute. It is naturally desirable to have a classifier that gives rat higher probability than other classes. If we force this softly, we can ensure that classifier is not blind towards unseen classes due to lack of any training example. From a learning perspective, without any regularization, we cannot hope classifier to classify unseen classes accurately. This problem was identified in, where they proposed entropy- based regularization in the form of Deep Calibration Network (DCN). 
DCN uses cross-entropy loss for seen classes, and regularize the model with entropy loss on unseen classes to train the network. Authors in DCN postulate that minimizing the uncertainty (entropy) of predicted unseen distribution of training samples, enables the network to become aware of unseen visual features. While minimizing uncertainty is a good choice of regularization, it does not eliminate the possibility of being confident about the wrong unseen class. Clearly in DCN's approach, for the above squirrel example, the uncertainty can be minimized even when the classifier gives high confidence to a wrong unseen class dolphin on an image of seen class squirrel. Utilizing similarity based soft-labeling implicitly regularizes the model in a supervised fashion. The similarity values naturally has information of how much certainty we want for specific unseen class. We believe that this supervised regularization is the critical difference why our model outperforms DCN with a significant margin. Figure 3 shows the effect of τ and the consequent assigned unseen distribution on accuracies for AwA dataset. Small τ enforces q to be concentrated on nearest unseen class, while large τ spread q over all the unseen classes and basically does not introduce helpful unseen class information to the classifier. The optimal value for τ is 0.2 for AwA dataset as depicted in Figure 3.b. The impact of τ on the assigned distribution for unseen classes is shown in Figure 3.a when seen class is squirrel in AwA dataset. Unseen distribution with τ = 0.2, well represents the similarities between seen class (squirrel) and similar unseen classes (rat, bat, bobcat) and basically verifies the of Figure 3.b where τ = 0.2 is the optimal temperature. While in the extreme cases, when τ = 0.01, distribution on unseen classes in mostly focused on the nearest unseen class, rat, and consequently the other unseen classes' similarities are ignored. Also τ = 10 flattens the unseen distribution which in high uncertainty and does not contribute helpful unseen class information to the learning. We proposed a discriminative GZSL classifier with visual-to-semantic mapping and cross-entropy loss. During training, while SZSL is trained on a seen class, it simultaneously learns similar unseen classes through soft labels based on semantic class attributes. We deploy similarity based soft labeling on unseen classes that allows us to learn both seen and unseen signatures simultaneously via a simple architecture. Our proposed soft-labeling strategy along with cross-entropy loss leads to a novel regularization via generalized similarity-based weighted cross-entropy loss that can successfully tackle GZSL problem. Soft-labeling offers a trade-off between seen and unseen accuracies and provides the capability to adjust these accuracies based on the particular application. We achieve state-of-the-art performance, in GZSL setting, on all five ZSL benchmark datasets while keeping the model simple, efficient and easy to train. | How to use cross-entropy loss for zero shot learning with soft labeling on unseen classes : a simple and effective solution that achieves state-of-the-art performance on five ZSL benchmark datasets. | 669 | scitldr |
In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent may set an internal curriculum by initially restricting its action space. Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data, value estimates, and state representations from restricted action spaces to the full task. We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large, multi-agent action spaces. The value of curricula has been well established in machine learning, reinforcement learning, and in biological systems. When a desired behaviour is sufficiently complex, or the environment too unforgiving, it can be intractable to learn the behaviour from scratch through random exploration. Instead, by "starting small" , an agent can build skills, representations, and a dataset of meaningful experiences that allow it to accelerate its learning. Such curricula can drastically improve sample efficiency . Typically, curriculum learning uses a progression of tasks or environments. Simple tasks that provide meaningful feedback to random agents are used first, and some schedule is used to introduce more challenging tasks later during training . However, in many contexts neither the agent nor experimenter has such unimpeded control over the environment. In this work, we instead make use of curricula that are internal to the agent, simplifying the exploration problem without changing the environment. In particular, we grow the size of the action space of reinforcement learning agents over the course of training. At the beginning of training, our agents use a severely restricted action space. This helps exploration by guiding the agent towards rewards and meaningful experiences, and provides low variance updates during learning. The action space is then grown progressively. Eventually, using the most unrestricted action space, the agents are able to find superior policies. Each action space is a strict superset of the more restricted ones. This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces. However, such a hierarchy is often easy to find. Continuous action spaces can be discretised with increasing resolution. Similarly, curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate. For example, in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion. Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space. We propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning. 
Since data from exploration using a restricted action space is still valid in the Markov Decision Processes (MDPs) corresponding to the less restricted action spaces, we can learn value functions in the less restricted action space with'off-action-space' data collected by exploring in the restricted action space. In our approach, we learn value functions corresponding to each level of restriction simultaneously. We can use the relationships of these value functions to each other to accelerate learning further, by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces, as well as sharing learned state representations. Empirically, we first demonstrate the efficacy of our approach in two simple control tasks, in which the resolution of discretised actions is progressively increased. We then tackle a more challenging set of problems with combinatorial action spaces, in the context of StarCraft micromanagement with large numbers of agents. Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate, we use hierarchical clustering to impose a restricted action space on the agents. Agents in a cluster are restricted to take the same action, but we progressively increase the number of groups that can act independently of one another over the course of training. Our method substantially improves sample efficiency on a number of tasks, outperforming learning any particular action space from scratch, a number of ablations, and an actor-critic baseline that learns a single value function for the behaviour policy, as in the work of. Code is available, but redacted here for anonymity. Curriculum learning has a long history, appearing at least as early as the work of in reinforcement learning, and for the training of neural networks since. In supervised learning, one typically has control of the order in which data is presented to the learning algorithm. For learning with deep neural networks, explored the use of curricula in computer vision and natural language processing. Many approaches use handcrafted schedules for task curricula, but others (; ;) study diagnostics that can be used to automate the choice of task mixtures throughout training. In a self-supervised control setting, use sensitivity analysis to automatically define a curriculum over action dimensions and prioritise their search space. In some reinforcement learning settings, it may also be possible to control the environment so as to induce a curriculum. With a resettable simulator, it is possible to use a sequence of progressively more challenging initial states . With a procedurally generated task, it is often possible to automatically tune the difficulty of the environments . Similar curricula also appear often in hierarchical reinforcement learning, where skills can be learned in comparatively easy settings and then composed in more complex ways later . use more general inter-task mappings to transfer Q-values between tasks that do not share state and action spaces. In adversarial settings, one may also induce a curriculum through self-play (; ;). In this case, the learning agents themselves define the changing part of the environment. A less invasive manipulation of the environment involves altering the reward function. Such reward shaping allows learning policies in an easier MDP, which can then be transferred to the more difficult sparse-reward task . 
It is also possible to learn reward shaping on simple tasks and transfer it to harder tasks in a curriculum . In contrast, learning with increasingly complex function approximators does not require any control of the environment. In reinforcement learning, this has often taken the form of adaptively growing the resolution of the state space considered by a piecewise constant discretised approximation (; ;). study continual complexification in the context of coevolution, growing the complexity of neural network architectures through the course of training. These works progressively increase the capabilities of the agent, but not with respect to its available actions. In the context of planning on-line with a model, there are a number of approaches that use progressive widening to consider increasing large action spaces over the course of search , including in planning for continuous action spaces (Couëtoux et al., 2011). However, these methods cannot directly be applied to grow the action space in the model-free setting. A recent related work tackling our domain is that of , who train mixtures of two policies with an actor-critic approach, learning a single value function for the current mixture of policies. The mixture contains a policy that may be harder to learn but has a higher performance ceiling, such as a policy with a larger action space as we consider in this work. The mixing coefficient is initialised to only support the simpler policy, and adapted via population based training. In contrast, we simultaneously learn a different value function for each policy, and exploit the properties of the optimal value functions to induce additional structure on our models. We further use these properties to construct a scheme for off-action-space learning which means our approach may be used in an off-policy setting. Empirically, in our settings, we find our approach to perform better and more consistently than an actor-critic algorithm modeled after , although we do not take on the significant additional computational requirements of population based training in any of our experiments. A number of other works address the problem of generalisation and representation for value functions with large discrete action spaces, without explicitly addressing the ing exploration problem . These approaches typically rely on action representations from prior knowledge. Such representations could be used in combination with our method to construct a hierarchy of action spaces with which to shape exploration. We formalise our problem as a MDP, specified by a tuple < S, A, P, r, γ >. The set of possible states and actions are given by S and A, P is the transition function that specifies the environment dynamics, and γ is a discount factor used to specify the discounted return R = T t=0 γ t r t for an episode of length T. We wish our agent to maximise this return in expectation by learning a policy π that maps states to actions. The state-action value function (Q-function) is given by The optimal Q-function Q * satisfies the Bellman optimality equation: Q-learning uses a sample-based approximation of the Bellman optimality operator T to iteratively improve an estimate of Q *. Q-learning is an off-policy method, meaning that samples from any policy may be used to improve the value function estimate. We use this property to engage Q-learning for off-action-space learning, as described in the next section. We also introduce some notation for restricted action spaces. 
In particular, for an MDP with unrestricted action space A we define a set of N action spaces A_ℓ, ℓ ∈ {0, . . ., N − 1}. Each action space is a subset of the next: A_0 ⊂ A_1 ⊂ . . . ⊂ A_{N−1} ⊆ A. A policy restricted to actions A_ℓ is denoted π_ℓ(a|s). The optimal policy in this restricted policy class is π*_ℓ(a|s), and its corresponding action-value and value functions are Q*_ℓ(s, a) and V*_ℓ(s) = max_a Q*_ℓ(s, a). Additionally, we define a hierarchy of actions by identifying for every action a ∈ A_ℓ, ℓ > 0 a parent action parent_ℓ(a) in the space of A_{ℓ−1}. Since action spaces are subsets of larger action spaces, for all a ∈ A_{ℓ−1}, parent_ℓ(a) = a, i.e., one child of each action is itself. Simple pieces of domain knowledge are often sufficient to define these hierarchies. For example, a discretised continuous action can identify its nearest neighbour in A_{ℓ−1} as a parent. In Section 5 we describe a possible hierarchy for multi-agent action spaces. One could also imagine using action-embeddings to learn such a hierarchy from data. We build our approach to growing action spaces (GAS) on off-policy value-based reinforcement learning. Q-learning and its deep-learning adaptations have shown strong performance, and admit a simple framework for off-policy learning. A value function for an action space A_ℓ may be updated with transitions using actions drawn from its own action space, or any more restricted action space, if we use an off-policy learning algorithm. The restricted transitions simply form a subset of the data required to learn the value functions of the less restricted action spaces. To exploit this, we simultaneously learn an estimated optimal value function Q̂*_ℓ(s, a) for each action space A_ℓ, and use samples drawn from a behaviour policy based on a value function for low ℓ to directly train the higher-ℓ value functions. At the beginning of each episode, we sample ℓ according to some distribution. The experiences generated in that episode are used to update all of the Q̂*_{ℓ'}(s, a) for ℓ' ≥ ℓ. This off-action-space learning is a type of off-policy learning that enables efficient exploration by restricting it to the low-ℓ regime. We sample ℓ at the beginning of the episode rather than at each timestep because, if the agent uses a high-ℓ action, it may enter a state that is inaccessible for a lower-ℓ policy, and we do not wish to force a low-ℓ value function to generalise to states that are only accessible at higher ℓ. Since data from a restricted action space only supports a subset of the state-action space relevant for the value functions of less restricted action spaces, we hope that a suitable function approximator still allows some generalisation to the unexplored parts of the less restricted state-action space. Note that V*_{ℓ+1}(s) ≥ V*_ℓ(s) for all states s. This is because each action space is a strict subset of the larger ones, so the agent can always in the worst case fall back to a policy using a more restricted action space. This monotonicity intuitively recommends an iterative decomposition of the value estimates, in which Q̂*_{ℓ+1}(s, a) is estimated as a sum of Q̂*_ℓ(s, a) and some positive ∆_{ℓ+1}(s, a). This is not immediately possible due to the mismatch in the support of each function. However, we can leverage a hierarchical structure in the action spaces when present, as described in Section 3. In this case we can use: Q̂*_{ℓ+1}(s, a) = Q̂*_ℓ(s, parent_{ℓ+1}(a)) + ∆_{ℓ+1}(s, a). This is a task-specific upsampling of the lower-ℓ value function to initialise the next value function. Both Q̂*_ℓ(s, a) and ∆_ℓ(s, a) are learned components.
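As a concrete illustration of nested action spaces and the nearest-neighbour parent mapping, the short Python sketch below uses generic dyadic discretisations of [-1, 1]; the function names and the particular grid construction are illustrative assumptions, not the exact force-halving scheme used in the experiments of Section 6.1.

import numpy as np

def build_action_spaces(num_levels):
    # Nested dyadic grids on [-1, 1]: A_l has 2**l + 1 actions and A_0 is a
    # subset of A_1, which is a subset of A_2, and so on. Illustrative only.
    return [np.linspace(-1.0, 1.0, 2 ** l + 1) for l in range(num_levels)]

def parent(action, coarser_space):
    # Nearest neighbour in the next-coarser action space A_{l-1}.
    return coarser_space[int(np.argmin(np.abs(coarser_space - action)))]

spaces = build_action_spaces(3)
print(spaces[2])                 # [-1.  -0.5  0.   0.5  1. ]
print(parent(0.5, spaces[2]))    # 0.5: one child of each action is itself
print(parent(0.5, spaces[1]))    # exact midpoints tie; argmin keeps the first (0.0)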
We could further regularise or restrict the functional form of ∆_ℓ to ensure its positivity when parent_ℓ(a) = a. However, we did not find this to be valuable in our experiments, and simply initialised ∆_ℓ to be small. The property V*_{ℓ+1}(s) ≥ V*_ℓ(s) also implies a modified Bellman optimality equation: Q*_ℓ(s, a) = E[r + γ max_{i≤ℓ} max_{a'∈A_i} Q*_i(s', a')]. The max over i < ℓ terms are redundant in their role as conditions on the optimal value function Q*_ℓ. However, the Bellman optimality equation also gives us the form of a Q-learning update, where the term in the expectation on the RHS is used as an operator that iteratively improves an estimate of Q*_ℓ. When these estimates are inaccurate, the modified form of the Bellman equation may lead to different updates, allowing the solutions at higher ℓ to be bootstrapped from those at lower ℓ. We expect that policies with low ℓ are easier to learn, and that therefore the corresponding Q̂*_ℓ is higher value and more accurate earlier in training than those at high ℓ. These high values could be picked up by the extra maximisation in the modified bootstrap, and thereby rapidly learned by the higher-ℓ value functions. Empirically however, we find that using this form for the target in our loss function performs no better than just maximising over Q̂*_ℓ(s, a). We discuss the choice of target and these results in more detail in Section 6.2. By sharing parameters between the function approximators of each Q̂*_ℓ, we can learn a joint state representation, which can then be iteratively decoded into estimates of Q*_ℓ for each ℓ. This shared embedding can be iteratively refined by, e.g., additional network layers for each Q̂*_ℓ to maintain flexibility along with transfer of useful representations. This simple approach has had great success in improving the efficiency of many multi-task solutions using deep learning. We need to choose a schedule with which to increase the ℓ used by the behaviour policy over the course of training. Prior work uses population based training to choose a mixing parameter on the fly. However, this comes at significant computational cost, and optimises greedily for immediate performance gains. We use a simple linear schedule on a mixing parameter α ∈ [0, N]. Initially α = 0 and we always choose ℓ = 0. Later, we pick ℓ = ⌊α⌋ with probability ⌈α⌉ − α and ℓ = ⌈α⌉ with probability α − ⌊α⌋ (e.g. if α = 1.1, we choose ℓ = 1 with 90% chance and ℓ = 2 with 10% chance). This worked well empirically with little effort to tune. Many other strategies exist for tuning a curriculum automatically (such as those explored in prior work), and could be beneficial, at the cost of additional overhead and algorithmic complexity. In cooperative multi-agent control, the full action space allows each of N agents to take actions from a set A_agent, resulting in an exponentially large action space of size |A_agent|^N. Random exploration in this action space is highly unlikely to produce sensible behaviours, so growing the action space as we propose is particularly valuable in this setting. One approach would be to limit the actions available to each agent, as done in our discretised continuous control experiments (Section 6.1) and in prior work. However, the joint action space would still be exponential in N. We propose instead to use hierarchical clustering, and to assign the same action to nearby agents. At the first level of the hierarchy, we treat the whole team as a single group, and all agents are constrained to take the same action. At the next level of the hierarchy, we split the agents into k groups using an unsupervised clustering algorithm, allowing each group to act independently.
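A sketch of such a recursive grouping, splitting each group into k sub-groups with k-means on the agents' spatial positions, is given below. The helper names are ours, and details used in practice, such as re-initialising centroids from the previous timestep, are omitted.

import numpy as np
from sklearn.cluster import KMeans

def hierarchical_groups(positions, num_levels, k=2):
    # Per-level group assignment for each agent: level 0 is a single group,
    # and each group at level l-1 is split into (at most) k groups at level l.
    n = len(positions)
    assignments = [np.zeros(n, dtype=int)]
    for _ in range(1, num_levels):
        prev = assignments[-1]
        new = np.zeros(n, dtype=int)
        for g in np.unique(prev):
            idx = np.where(prev == g)[0]
            if len(idx) >= k:
                labels = KMeans(n_clusters=k, n_init=10).fit_predict(positions[idx])
            else:
                labels = np.zeros(len(idx), dtype=int)   # too few agents left to split
            new[idx] = g * k + labels
        assignments.append(new)
    return assignments

positions = np.random.rand(8, 2) * 10.0          # 8 agents on a 2-D map
groups = hierarchical_groups(positions, num_levels=3)
print(groups[0])   # all agents in one group
print(groups[2])   # up to four independently acting groups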
At each further level, every group is split once again into k smaller groups. In practice, we simply use k-means clustering based on the agent's spatial position, but this can be easily extended to more complex hierarchies using other clustering approaches. To estimate the value function, we compute a state-value scoreV (s), and a group-action delta ∆ (s, a g, g) for each group g at each level. Then, we compute an estimated group-action value for each group, at each level, using a per-group form of. We useQ * −1 (s, ·) =V (s) to initialise the iterative computation, similarly to the dueling architecture of. The estimated value of the parent action is the estimated value of the entire parent group all taking the same action as the child group. At each level we now have a set of group-action values. In effect, a multi-agent value-learning problem still remains at each level, but with a greatly reduced number of agents at low. We could simply use independent Q-learning , but instead choose to estimate the joint-action value at each level as the mean of the group-action values for the groups at that, as in the work of. A less restrictive representation, such as that proposed by , could help, but we leave this direction to future work. A potential problem is that the clustering changes for every state, which may interfere with generalisation as group-actions will not have consistent semantics. We address this in two ways. First, we include the clustering as part of the state, and the cluster centroids are re-initialised from the previous timestep for t > 0 to keep the cluster semantics approximately consistent. Second, we use a functional representation that produces group-action values that are broadly agnostic to the identifier of the group. In particular, we compute a spatially resolved embedding, and pool over the locations occupied by each group. See Figure 2 and Section 6.2 for more details. We investigate two classes of problems that have a natural hierarchy in the action space. First, simple control problems where a coarse action discretisation can help accelerate exploration, and fine action discretisation allows for a more optimal policy. Second, the cooperative multi-agent setting, discussed in Section 5, using large-scale StarCraft micromanagement scenarios. As a proof-of-concept, we look at two simple examples: versions of the classic Acrobot and Mountain Car environments with discretised action spaces. Both tasks have a sparse reward of +1 when the goal is reached, and we make the exploration problem more challenging by terminating episodes Figure 1: Discretised continuous control with growing action spaces. We report the mean and standard error (over 10 random seeds) of the returns during training, with a moving average over the past 20 episodes. A 2 (slow) is an ablation of A 2 that decays at a quarter the rate. with a penalty of -1 if the goal is not reached within 500 timesteps. The normalised remaining time is concatenated to the state so it remains Markovian despite the time limit. There is a further actuation cost of 0.05 a 2. At A 0, the actions apply a force of +1 and −1. At each subsequent A >0, each action is split into two children, one that is the same as the parent action, and the other applying half the force. Thus, there are 2 actions in A. The of our experiments are shown in Figure 1. Training with the lower resolutions A 0 and A 1 from scratch converges to finding the goal, but incurs significant actuation costs. 
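For reference, the reward shaping and time augmentation of the modified Acrobot and Mountain Car tasks described above can be sketched as follows; the function names and the episode bookkeeping are illustrative assumptions.

def shaped_step(reached_goal, action, t, max_t=500, actuation_coef=0.05):
    # Sparse +1 goal reward, -1 timeout penalty, and an actuation cost of 0.05 * a^2.
    reward = -actuation_coef * action ** 2
    done = False
    if reached_goal:
        reward, done = reward + 1.0, True
    elif t >= max_t:
        reward, done = reward - 1.0, True
    return reward, done

def augment_state(obs, t, max_t=500):
    # Concatenate the normalised remaining time so the time limit stays Markovian.
    return list(obs) + [(max_t - t) / max_t]

print(shaped_step(reached_goal=False, action=0.5, t=500))   # (-1.0125, True)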
Training with A 2 from scratch almost never finds the goal with -greedy exploration. We also tried decaying the at a quarter of the rate (A 2 slow) without success. In these cases, the policy converges to the one that minimises actuation costs, never finding the goal. Training with a growing action space explores to find the goal early, and then uses this experience to transition smoothly into a solution that finds the goal but takes a slower route that minimises actuation costs while achieving the objective. The real-time strategy game StarCraft and its sequel StarCraft II have emerged as popular platforms for benchmarking reinforcement learning algorithms. Full game-play has been tackled by e.g. , while other works focus on sub-problems such as micromanagement, the low-level control of units engaged in a battle between two armies (e.g.). Efforts to approach the former problem have required some subset of human demonstrations, hierarchical methods, and massive compute scale, and so we focus on the latter as a more tractable benchmark to evaluate our methods. Most previous work on RL benchmarking with StarCraft micromanagement is restricted to maximally 20-30 units (; . In our experiments we focus on much larger-scale micromanagement scenarios with 50-100 units on each side of the battle. To further increase the difficulty of these micromanagement scenarios, in our setting the starting locations of the armies are randomised, and the opponent is controlled by scripted logic that holds its position until any agent-controlled unit is in range, and then focus-fires on the closest enemy. This increases the exploration challenge, as our agents need to learn to find the enemy first, while they hold a strong defensive position. The action space for each unit permits an attack-move or move action in eight cardinal directions, as well as a stop action that causes the unit to passively hold its position. In our experiments, we use k = 2 for k-means clustering and split down to at most four or eight groups. The maximum number of groups in an experiment with A is 2 . Although our approach is designed for off-policy learning, we follow the common practice of using n-step Q-learning to accelerate the propagation of values . Our base algorithm uses the objective of n-step Q-learning from the work of , and collects data from multiple workers into a short queue similarly to. Full details can be found in the Appendix. We propose an architecture to efficiently represent the value functions of the action-space hierarchy. The overall structure is shown in Figure 2. We start with the state of the scenario. Ally units are blue and split into two groups. From the state, features are extracted from the units and map (see Appendix for full details). These features are concatenated with a one-hot representation of the unit's group (for allied agents), and are embedded with a small MLP. A 2-D grid of embeddings is constructed by adding up the unit embeddings for all units in each cell of the grid. The embeddings are passed through a residual CNN to produce a final embedding, which is copied several times and decoded as follows. First, a state-value branch computes a scalar value by taking a global mean pooling and passing the through a 2-layer MLP. Then, for each, a masked mean-pooling is used to produce an embedding for each group at that A by masking out the positions in the spatial embedding where there are no units of that group (5a, 5b, 5c). 
A single evaluation MLP for each is used to decode this embedding into a group action-score (7a, 7b, 7c). This architecture allows a shared state representation to be efficiently decoded into value-function contributions for groups of any size, at any level of restriction in the action space. We consider two approaches for combining these outputs. In our default approach, described in Section 5, each group's action-value is given by the sum of the state-value and group-action-scores for the group and its parents (8a, 8b). In'SEP-Q', each group's action-value is simply given by the state-value added to the group-action score, i.e.,Q * (s, a g) =V (s) + ∆ (s, a g, g). This is an ablation in which the action-value estimates for restricted action spaces do not initialise the actionvalue estimates of their child actions. Figure 3 presents the of our method, as well as a number of baselines and ablations, on a variety of micromanagement tasks. Our method is labeled Growing Action Spaces GAS, such that GAS will grow from A 0 to A 2. Our primary baselines are policies trained with action spaces A 0 or A 2 from scratch. GAS consistently outperforms both of these variants. Policies trained from scratch on A 2 struggle with exploration, in particular in the harder scenarios where the opponent has a numbers advantage. Policies trained from scratch on A 0 learn quickly, but plateau comparatively low, due to the limited ability of a single group to position effectively. GAS benefits from the efficient exploration enabled by an intialisation at A 0, and uses the data gathered under this policy to efficiently transfer to A 2; enabling a higher asymptotic performance. We also compare against a Mix&Match (MM) baseline using the actor-critic approach of , but adapted for our new multi-agent setting and supporting a third level in the mixture Figure 3: StarCraft micromanagement with growing action spaces. We report the mean and standard error (over 5 random seeds) of the evaluation winrate during training, with a moving average over the past 500 episodes. of policies (A 0, A 1, A 2). We tuned hyperparameters for all algorithms on the easiest, fastesttraining scenario (80 marines vs. 80 marines). On this scenario, MM learns faster but plateaus at the same level as GAS. MM underperforms on all other scenarios to varying degrees. Learning separate value functions for each A, as in our approach, appears to accelerate the transfer learning in the majority of settings. Another possible explanation is that MM may be more sensitive to hyperparameters. We do not use population based training to tune hyperparameters on the fly, which could otherwise help MM adapt to each scenario. However, GAS would presumably also benefit from population based training, at the cost of further computation and sample efficiency. The policies learned by GAS exhibit good tactics. Control of separate groups is used to position our army so as to maximise the number of attacking units by forming a wall or a concave that surrounds the enemy, and by coordinating a simultaneous assault. Figure 5 in the Appendix shows some example learned policies. In scenarios where MM fails to learn well, it typically falls into a local minimum of attacking head-on. In each scenario, we test an ablation GAS: ON-AC that does not use our off-action-space update, instead training each level of the Q-function only with data sampled at that level. This ablation performs somewhat worse on average, although the size of the impact varies in different scenarios. 
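The two composition rules compared above, the default GAS composition and the 'SEP-Q' ablation, can be summarised in code as below; delta_scores[l][g] stands for the group-action scores ∆_l(s, a_g, g) from the evaluation MLP at level l, parent_group maps a group to its parent at the previous level, and all names are ours.

def q_value_gas(state_value, delta_scores, parent_group, level, group, action):
    # Default GAS composition: state value plus the group-action scores of the
    # group and all of its ancestors, each evaluated at the shared action.
    q, g = state_value, group
    for l in range(level, -1, -1):
        q += delta_scores[l][g][action]
        if l > 0:
            g = parent_group[l][g]   # move up the hierarchy
    return q

def q_value_sep(state_value, delta_scores, level, group, action):
    # 'SEP-Q' ablation: no initialisation from the coarser levels.
    return state_value + delta_scores[level][group][action]

# Toy example: two levels, two groups at level 1, three actions.
state_value = 0.5
delta_scores = [{0: [0.1, 0.2, 0.0]},                       # level 0: one group
                {0: [0.0, 0.3, 0.1], 1: [0.2, 0.0, 0.0]}]   # level 1: two groups
parent_group = [{0: 0}, {0: 0, 1: 0}]
print(q_value_gas(state_value, delta_scores, parent_group, level=1, group=1, action=0))
# 0.5 + 0.2 (level 1, group 1) + 0.1 (level 0 parent) = 0.8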
In some tasks, it is beneficial to accelerate learning for finer action spaces using data drawn from the off-action-space policy. In Appendix A.1.1, the same ablation shows significantly worse performance on the Mountain Car task and comparable performance on Acrobot. We present a number of further ablations on two scenarios. The most striking failure is of the'SEP-Q' variant which does not compose the value function as a sum of scores in the hierarchy. It is critical to ensure that values are well-initialised as we move to less restricted action spaces. In the discretised continuous control tasks,'SEP-Q' also underperforms, although less dramatically. The choice of target is less important: performing a max over coarser action spaces to construct the target as described in Section 4.2 does not improve learning speed as intended. One potential reason is that maximising over more potential targets increases the maximisation bias already present in Q-learning . Additionally, we use an n-step objective which combines a partial onpolicy return with the bootstrap target, which could reduce the relative impact of the choice of target. Finally, we experiment with a higher. Unfortunately, asymptotic performance is degraded slightly once we use A 3 or higher. One potential reason is that it decreases the average group size, pushing against the limits of the spatial resolution that may be captured by our CNN architecture. Higher increases the amount of time that there are fewer units than groups, leaving certain groups empty and rendering our masked pooling operation degenerate. We do not see a fundamental limitation that should restrict the further growth of the action space, although we note that most hierarchical approaches in the literature avoid too many levels of depth. For example, only mix between two sizes of action spaces rather than the three we progress through in the majority of our GAS experiments. In this work, we presented an algorithm for growing action spaces with off-policy reinforcement learning to efficiently shape exploration. We learn value functions for all levels of a hierarchy of restricted action spaces simultaneously, and transfer data, value estimates, and representations from more restricted to less restricted action spaces. We also present a strategy for using this approach in cooperative multi-agent control. In discretised continuous control tasks and challenging multiagent StarCraft micromanagement scenarios, we demonstrate empirically the effectiveness of our approach and the value of off-action-space learning. An interesting avenue for future work is to automatically identify how to restrict action spaces for efficient exploration, potentially through meta-optimisation. We also look to explore more complex and deeper hierarchies of action spaces. A.1 DISCRETISED CONTINUOUS CONTROL A.1.1 ADDITIONAL ABLATIONS Here, we present on some additional ablations of GAS on the discretised continous control tasks. SEP-Q performs slightly worse on both tasks, a less dramatic failure than in the StarCraft experiments. These value functions are simpler, and it is easier to learn the new action space's value without relying so much on the previous one. ON-AC performs worse only on Mountain Car, suggesting once again that the significance of this component of the algorithm is somewhat problemdependent. 
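For concreteness, a sketch of the two bootstrap targets discussed above: the plain per-level maximisation and the variant that also maximises over the coarser levels. The names are illustrative, and the n-step returns and target network used in the actual experiments are omitted.

import numpy as np

def plain_target(reward, gamma, next_q, level):
    # Bootstrap only from the same level: r + gamma * max_a Q_l(s', a).
    return reward + gamma * np.max(next_q[level])

def modified_target(reward, gamma, next_q, level):
    # Also maximise over all coarser levels i <= l, as suggested by the
    # modified Bellman optimality equation.
    best = max(np.max(next_q[i]) for i in range(level + 1))
    return reward + gamma * best

next_q = [np.array([1.0, 0.5]),             # Q_0(s', .)
          np.array([0.2, 0.4, 0.1, 0.3])]   # Q_1(s', .), less accurate early on
print(plain_target(0.0, 0.99, next_q, level=1))      # 0.396
print(modified_target(0.0, 0.99, next_q, level=1))   # 0.99, picks up the level-0 value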
We also test a version that follows the intuition of the'Match' objective of M&M more closely, adapted for the value-based setting: instead of using an adaptive initialisation of each level's Q-function as described in the main text, we use an L2 penalty to'Match' the new level's value function to its parent action, which should have a similar effect. This variant performs similarly here (perhaps slightly worse in the more challenging Mountain Car). For our experiments in discretised continous control, we use a standard DQN trainer with the following parameters. For GAS experiments, we keep the mixing coefficient α = 0 for 25000 environment steps, and then increase it linearly by 1 every 25000 steps until reaching the maximum value. We use γ = 0.998 for our Acrobot experiments, but reduce it to γ = 0.99 for Mountain Car to prevent diverging Q-values. Our model consists of fully-connected ReLU layers, with 128 hidden units for the first and 64 hidden units for all subsequent layers. Two layers are applied as an encoder. Then, for each one layer is applied on the current embedding to produce a new embedding, and an evaluation layer on that embedding produces the Q-values for that level. In these examples, the opponent is always on the right, and the agent controlled by model trained with GAS is on the left. We explore five Starcraft micromanagement scenarios: 50 hydralisks vs 50 hydralisks, 80 marines vs 80 marines, 80 marines vs 85 marines, 60 marines vs 65 marines, 95 zerglings vs 50 marines. In these scenarios, our model controls the first set of units, and the opponent controls the second set. The opponent is a scripted opponent that holds its location until an opposing unit is within range to attack. Then, the opponent will engage in an "attack-closest" behavior, as described in, where each unit individually targets the closest unit to it. Having the opponent remain stationary until engaged makes this a more difficult problem -the agent must find its opponent, and attack into a defensive position, which requires good positions prior to engagement. As mentioned in section 6.2, all of our scenarios require control of a much larger number of units than previous work. The 50 hydralisks and 80v80 marines scenarios are both imbalanced as a of attacking into a defensive position. The optimal strategy for 80 marines vs 85 marines and 60 vs 65 marines requires slightly more sophisticated unit positioning, and the 95 zerglings vs 50 marines scenario requires the most precise positioning. The agent can use the enemy's initial stationary positioning to its advantage by slightly surrounding the opponent in a concave, ensuring that the outermost units are in its attack range, but far enough away to be out of range of the center-most enemy units. Ideally, the timing of the groups in all scenarios should be coordinated such that all units get in range of the opponent at roughly the same point in time. Figure 5 shows how our model is able to exhibit this level of unit control. We use a standard features for the units and map, given by TorchcraftAI 1 For each of the units, the following features are extracted: 1 https://github.com/TorchCraft/TorchCraftAI • Current x, y positions. • Current x, y velocities. • Current hitpoints • Armor and damage values • Armor and damage types • Range versus both ground and air units • Current weapon cooldown • A few boolean flags on some miscellaneous unit attributes Approximate normalization for each feature keep its value approximately between 0-1. 
For the map, the following features are extracted for each tile in the map: • a one-hot encoding of tile's the ground height (4 channels) • boolean representing or not the given tile is walkable • boolean representing or not the given tile is buildable • and boolean representing or not the given tile is covered by fog of war. The features form a HxW x7 tensor, where our map has height H and width W. We use a frame-skip of 25, approximately 1 second of real time, allowing for reasonably fine-grained control but without making the exploration and credit assignment problems too challenging. We calculate at every timestep the difference in total health points (HP) and number of units for the enemy from the last step, normalised by the total starting HP and unit count. As a reward function, we use the normalised damage dealt, plus 4 times the normalised units killed, plus an additional reward of 8 for winning the scenario by killing all enemy units. This reward function is designed such that the agent gets some reward for doing damage and killing units, but the reward from doing damage will never be greater than from winning the scenario. Ties and timeouts are considered losses. A.3.1 MODEL As described in Section 6.2.2 a custom model architecture is used for Starcraft micromanagement. Each unit's feature vector is embedded to size 128 in step 2 of Figure 2. The grid where the unit features and map features are scattered onto is the size of the Starcraft map of the scenario in walktiles downsampled by a factor of 8. After being embedded, the unit features for ally and enemy units are concatenated with the downsampled map features and sent into a ResNet encoder with four residual blocks (stride 7 padding 3). The output is an embedding of size 64. The decoder uses a mean pooling over the embedding cells as described in Section 6.2.2. Each evaluator is a 2-layer MLP with 64 hidden units and 17 outputs, one for each action. All layers are separated with ReLU nonlinearities. We use 64 parallel actors to collect data in a short queue from which batches are removed when they are consumed by the learner. We use batches of 32 6-step segments for each update. For the Q-learning experiments, we used the Adam optimizer with a learning rate of 2.5 × 10 −4 and = 1 × 10 −4. For the MM baseline experiments, we use a learning rate of 1 × 10 −4, entropy loss coefficient of 8 × 10 −3 and value loss coefficient 0.5. The learning rates and entropy loss coefficient were tuned by random search, training with A 0 from scratch on the 80 marines vs 80 marines scenario with 10 configurations sampled from log uniform(−5, −3) for the learning rate and log uniform(−3, −1) for the entropy loss coefficient. For Q-learning, we use an -greedy exploration strategy, decaying linearly from 1.0 to 0.1 over the first 10000 model updates. We also use a target network that copies the behaviour model's parameters every 200 model updates. We also use a linear schedule to grow the action-space. There is a lead in of 5000 model updates, during which the action-space is held constant at A 0, to prevent the action space from growing when or the policy entropy is too high. The action-space is then grown linearly at a rate of 10000 model updates per level of restriction, so that after 10000 updates, we act entirely at A 1 and after 20000, entirely at A 2. | Progressively growing the available action space is a great curriculum for learning agents | 670 | scitldr |
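The schedule for growing the action space described above can be written in a few lines; the default constants below mirror the StarCraft settings (a lead-in of 5000 updates and 10000 updates per level), and the fractional sampling of ℓ follows the mixing rule described earlier (ℓ = ⌊α⌋ or ⌈α⌉ with the corresponding probabilities). The function names are ours.

import math, random

def alpha_schedule(update, lead_in=5000, updates_per_level=10000, num_levels=3):
    # Hold alpha at 0 for a lead-in period, then grow it linearly by one level
    # per updates_per_level model updates, capped at the most unrestricted level.
    return min(max(0.0, (update - lead_in) / updates_per_level), num_levels - 1)

def sample_level(alpha):
    # l = floor(alpha) with probability ceil(alpha) - alpha, else l = ceil(alpha);
    # e.g. alpha = 1.1 gives l = 1 with 90% chance and l = 2 with 10% chance.
    lo, hi = math.floor(alpha), math.ceil(alpha)
    if lo == hi:
        return lo
    return lo if random.random() < (hi - alpha) else hi

alpha = alpha_schedule(update=12000)   # 0.7 of the way from A_0 to A_1
print(sample_level(alpha))             # 0 with 30% chance, 1 with 70% chance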
Recently, researchers have discovered that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes. It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the classifier by retraining if she has the adversarial examples. The cat-and-mouse game nature of attacks and defenses raises the question of the presence of equilibria in the dynamics. In this paper, we present a neural-network based attack class to approximate a larger but intractable class of attacks, and formulate the attacker-defender interaction as a zero-sum leader-follower game. We present sensitivity-penalized optimization algorithms to find minimax solutions, which are the best worst-case defenses against whitebox attacks. Advantages of the learning-based attacks and defenses compared to gradient-based attacks and defenses are demonstrated with MNIST and CIFAR-10. Recently, researchers have made an unsettling discovery that the state-of-the-art object classifiers can be fooled easily by small perturbations in the input unnoticeable to human eyes BID24 BID7. Following studies tried to explain the cause of the seeming failure of deep learning toward such adversarial examples. The vulnerability was ascribed to linearity BID24, low flexibility BID4, or the flatness/curvedness of decision boundaries BID20, but a more complete picture is still under research. This is troublesome since such a vulnerability can be exploited in critical situations such as an autonomous car misreading traffic signs or a facial recognition system granting access to an impersonator without being noticed. Several methods of generating adversarial examples were proposed BID7 BID19 BID2, most of which use the knowledge of the classifier to craft examples. In response, a few defense methods were proposed: retraining target classifiers with adversarial examples called adversarial training BID24 BID7; suppressing gradient by retraining with soft labels called defensive distillation BID22; hardening target classifiers by training with an ensemble of adversarial examples BID25.In this paper we focus on whitebox attacks, that is, the model and the parameters of the classifier are known to the attacker. This requires a more robust classifier or defense method than simply relying on the secrecy of the parameters as defense. When the classifier parameters are known to an attacker, existing attack methods are very successful at fooling the classifiers. Conversely, when the attack is known to the classifier, e.g., in the form of adversarial examples, one can weaken the attack by retraining the classifier with adversarial examples, called adversarial training. However, if we repeat adversarial sample generation and adversarial training back-to-back, it is observed that the current adversarially-trained classifier is no longer robust to previous attacks (see Sec. 3.1.) To find the classifier robust against the class of gradient-based attacks, we first propose a sensitivitypenalized optimization procedure. Experiments show that the classifier from the procedure is more robust than adversarially-trained classifiers against previous attacks, but it still remains vulnerable to some degrees. This raises the main question of the paper: Can a classifier be robust to all types of attacks? 
The answer seems to be negative in light of the strong adversarial examples that can be crafted by direct optimization procedures from or BID2. Note that the class of optimization-based attack is very large, as there is no restriction on the adversarial patterns that can be generated except for certain bounds such as l p -norm bounds. The vastness of the optimization-based attack class is a hindrance to the study of the problem, as the defender cannot learn efficiently about the attack class from a finite number of samples. To study the problem analytically, we use a class of learning-based attack that can be generated by a class of neural networks. This class of attack can be considered an approximation of the class of optimization -based attacks, in that the search space of optimal perturbation is restricted to the parameter space of a neural network architecture, e.g., all perturbations that can be generated by fully-connected 3-layer ReLU networks. Similar to what we propose, others have recently considered training neural networks to generate adversarial examples BID21 BID0. While the proposed learning-based attack is weaker than the optimization-based attack, it can generate adversarial examples in test time with only single feedforward passes, which makes real-time attacks possible. We also show that the class of neural-network based attacks is quite different from the the class of gradient-based attacks (see Sec. 4.1.) Using the learning-based attack class, we introduce a continuous game formulation for analyzing the dynamics of attack-defense. The game is played by an attacker and a defender/classifier 1, where the attacker tries to maximize the risk of the classification task by perturbing input samples under certain constraints such as l p -norm bounds, and the defender/classifier tries to adjust its parameters to minimize the same risk given the perturbed inputs. It is important to note that for adversarial attack problems, the performance of an attack or a defense cannot be measured in isolation, but only in pairs of (attack, defense). This is because the effectiveness of an attack/defense depends on the defense/attack it is against. As a two-player game, there may not be a dominant defense that is no less robust than all other defenses against all attacks. However, there is a natural notion of the best defense or attack in the worst case. Suppose one player moves first by choosing her parameters and the other player responds with the knowledge of the first player's move. This is an example of a leader-follower game BID1 for which there are two well-known states, the minimax and the maximin solutions if it is a constant-sum game. To find those solutions empirically, we propose a new continuous optimization method using the sensitivity penalization term. We show that the minimax solution from the proposed method is indeed different from the solution from the conventional alternating descent/ascent and is also more robust. We also show that the strength/weakness of the minimax-trained classifier is different from that of adversarially-trained classifiers for gradient-based attacks. 
The contributions of this paper are summarized as follows.• We provide a continuous game model to analyze adversarial example attacks and defenses, using the neural network-based attack class as a feasible approximation to a larger but intractable class of optimization-based attacks.• We demonstrate the difficulty of defending against multiple attack types and present the minimax defense as the best worst-case defense methods.• We propose a sensitivity-penalized optimization method (Alg. 1) to numerically find continuous minimax solutions, which is better than alternating descent/ascent. The proposed optimization method can also be used for other minimax problems beyond the adversarial example problem. The proposed methods are demonstrated with the MNIST and the CIFAR-10 datasets. For readability, details about experimental settings and the with CIFAR-10 are presented in the appendix. Making a classifier robust to test-time adversarial attacks has been studied for linear (kernel) hyperplanes BID13, naive Bayes BID3 and SVM BID5, which also showed the game-theoretic nature of the robust classification problems. Since the recent discovery of adversarial examples for deep neural networks, several methods of generating adversarial samples were proposed BID24 BID7 BID19 BID2 as well as several methods of defense BID24 BID7 BID22 BID25. These papers considered static scenarios, where the attack/defense is constructed against a fixed opponent. A few researchers have also proposed using a detector to detect and reject adversarial examples BID16 BID14 BID18. While we do not use detectors in this work, the minimax approach we proposed in the paper can be applied to train the detectors. The idea of using neural networks to generate adversarial samples has appeared concurrently BID0 BID21. Similar to our paper, the two papers demonstrates that it is possible to generate strong adversarial samples by a learning approach. BID0 explored different architectures for the "adversarial transformation networks" against several different classifiers. BID21 proposed "attack learning neural networks" to map clean samples to a region in the feature space where misclassification occurs and "defense learning neural networks" to map them back to the safe region. Instead of prepending the defense layers before the fixed classifier BID21, we retrain the whole classifier as a defense method. However, the key difference of our work to the two papers is that we consider the dynamics of a learning-based defense stacked with a learning-based attack, and the numerical computation of the optimal defense/attack by continuous optimization. The alternating gradient-descent method for finding an equilibrium of a game has gained renewed interest since the introduction of Generative Adversarial Networks (GAN) BID6. However, the instability of the alternating gradient-descent method has been known, and the "unrolling" method BID17 was proposed to speed up the GAN training. The optimization algorithm proposed in the paper has a similarity with the unrolling method, but it is simpler (corresponding to a single-step unrolling) and involves a gradient-norm regularization which can be interpreted intuitively as sensitivity penalization BID8 BID15. Lastly, the framework of minimax risks was also studied in BID9 for the purpose of privacy preservation. 
We propose a different algorithm in this paper, but we also show that the attack on classification and the attack on privacy are the two sides of the same optimization problem with the opposite goals.3 CAT-AND-MOUSE GAME A classifier whose parameters are known to an attacker is easy to attack. Conversely, an attacker whose sample-generating method is known to a classifier is easy to defend from. In this section, we demonstrate the cat-and-mouse nature of the interaction, using adversarial training (Adv Train) as defense and the fast gradient sign method (FGSM) BID7 and the iterative version (IFGSM) BID11 as attacks. We then show that the equilibrium, if it exists, can be found more efficiently by directly solving a sensitivity-penalized optimization problem. Suppose g is a classifier g: X → Y and l(g(x), y) is a loss function. The FGSM attack generates a perturbed example z(x) given the clean sample x as follows: DISPLAYFORM0 The clean input images we use here are l ∞ -normalized, that is, all pixel values are in the range [−1, 1]. It was argued that the use of true label y in "label leaking" BID12 ), but we use will true labels in the paper for simplicity. For another attack example, the IFGSM attack iteratively refines an adversarial example by the following update DISPLAYFORM1 where the clipping used in this paper is clip x,η (x) min{1, x + η, max{−1, x − η, x}}.Existing attack methods such as FGSM and IFGSM are very effective at fooling the classifier. Table 1 shows that the two methods are able to perfectly fool a convolutional neural network trained with clean images from MNIST. (Details of the classifier architecture and the settings are in the appendix.)On the other hand, these attacks, if known to the classifier, can be weakened by retraining the classifier with the original dataset augmented by adversarial examples with ground-truth labels, known as adversarial training. In this paper we use the 1:1 mixture of the clean and the adversarial samples for adversarial training. Table 2 shows the of adversarial training for different attacks. Defense\Attack No attack FGSM IFGSM η=0.3 η=0.4 η=0.5 η=0.6 η=0.3 η=0.4 η=0.5 η=0.6 No defense 0.006 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 Table 1: Test error rates of FGSM and IFGSM attacks on an undefended convolutional neural network for MNIST. These attacks can cause perfect misclassification for the given range of η. The test error rates for adversarial test examples after training become below 1% indicating nearperfect avoidance. This is in stark contrast with the perfect misclassification of the undefended classifier in Table 1.Defense\Attack No attack FGSM IFGSM η=0.3 η=0.4 η=0.5 η=0.6 η=0.3 η=0.4 η=0.5 η=0.6 Adv train n/a 0.004 0.003 0.003 0.005 0.003 0.003 0.004 0.010 Table 2: Error rates of FGSM and IFGSM attacks on adversarially-trained classifiers for MNIST. This defense can avert the attacks and achieve the error rates of the no-attack case. A question arises as to what would happen if the procedure of 1) adversarial sample generation using the current classifier, and 2) retraining classifier using the current adversarial examples is repeated for many rounds. The answer to this cat-and-mouse game is easy to experiment although time-consuming. Let's denote the attack on the original classifier as FGSM1, and the corresponding retrained classifier as Adv FGSM1. Repeating the procedure above generates the sequence of models FGSM1 → Adv FGSM1 → FGSM2 → Adv FGSM2, etc. 
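For concreteness, the FGSM and IFGSM updates defined earlier in this section can be sketched in PyTorch as below; model and loss_fn stand for an arbitrary differentiable classifier and loss, inputs are assumed to lie in [-1, 1], and the per-step size eta/steps for IFGSM is one common choice rather than the exact setting used here.

import torch

def fgsm(model, loss_fn, x, y, eta):
    # z = x + eta * sign(grad_x loss), clipped to the valid pixel range [-1, 1].
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return torch.clamp(x + eta * grad.sign(), -1.0, 1.0).detach()

def ifgsm(model, loss_fn, x, y, eta, steps=10):
    # Iterated FGSM with the clip_{x, eta} operator from the text: every iterate
    # stays inside the eta-ball around x and inside the pixel range [-1, 1].
    z = x.clone().detach()
    for _ in range(steps):
        z.requires_grad_(True)
        loss = loss_fn(model(z), y)
        grad, = torch.autograd.grad(loss, z)
        z = z + (eta / steps) * grad.sign()
        z = torch.max(torch.min(z, x + eta), x - eta)   # stay in the eta-ball
        z = torch.clamp(z, -1.0, 1.0).detach()          # stay in the pixel range
    return z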
FIG0 shows one such trial with 80 + 80 rounds of the procedure. Initially, the attacker achieves near-perfect attacks (i.e., error rate 1), and the defender achieves near-perfect defense (i.e., error rate 0). As the iteration increases, the attacker becomes weaker with error rate 0.5, but the defense is still very successful, and the rate seems to oscillate persistently. While we can run more iterations to see if it converges, this is not a very principled nor efficient approach to find an equilibrium, if it exists. We can perform the cat-and-mouse simulation more efficiently by an optimization approach. Instead of training the classifier fully with adversarial examples and then regenerating adversarial examples, suppose we only update the classifier with a single gradient-descent step then regenerate adversarial examples. To emphasize the parameters u of the classifier/defender g(x; u), let's rewrite the empirical risk of classifying the perturbed data as DISPLAYFORM0 where z(x) denote an FGSM-like attack based on the loss gradient DISPLAYFORM1 and DISPLAYFORM2 ) is the sequence of perturbed examples. In expectation of the attack, the defender should choose u to minimize f (u, Z(u)) where the dependence of the attack on the classifier u is expressed explicitly. If we minimize f using gradient descent DISPLAYFORM3 then from the chain rule, the total derivative DISPLAYFORM4 from FORMULA2 and.Interestingly, this total derivative at the current state coincides with the gradient ∇ u of the following cost DISPLAYFORM5 where γ = ηN. There are two implications. Interpretation-wise, this cost function is the sum of the original risk f and the'sensitivity' term ∂f /∂Z 2 which penalizes abrupt changes of the risk w.r.t. the input. Therefore, u is chosen at each iteration to not only decrease the risk but also to make the classifier insensitive to input perturbation so that the attacker cannot take advantage of large gradients. The idea of minimizing the sensitivity to input is a familiar approach in robustifying classifiers BID8 BID15. Secondly, the new formulation can be implemented easily. The gradient descent update using the seemingly complicated gradient can be replaced by the gradient descent update of. The capability of automatic differentiation BID23 in modern machine learning libraries can be used to compute the gradient of FORMULA7 efficiently. Using this direct approach, we can find the defense parameters u which will be robust to gradient-based attacks. Fig. 2 shows the decrease of test error during training using the this gradient descent approach for MNIST. It only takes a very small fraction of time to reach the final states of the There is also an important difference between the solution of the cat-and-mouse game and the minimizer of. Table 3 shows that the adversarially trained classifier (Adv FGSM1) is robust to both clean data and FGSM1 attack, but is susceptible to FGSM2 attack, displaying the cat-and-mouse nature. The same holds for Adv FGSM2, Adv FGSM3, etc. After 80 rounds of the cat-and-mouse procedure, the classifier Adv FGSM80 becomes robust to FGSM80 as well as moderately robust to other attacks including FGSM81 (=FGSM-curr). However, the classifier Sens FGSM from direct minimization of FORMULA7 is even more robust toward FGSM-curr than Adv FGSM80 and is overall the best. To see the advantage of the sensitivity term in FORMULA7, we also performed the minimization of FORMULA7 without the sensitivity term under the same conditions as Sens FGSM. 
This optimization method is similar to the method proposed in, referred to as Learning with Adversaries (LWA FGSM). In the table, one can see that Sens FGSM is also better than LWA FGSM overall, although the difference is small. Note that Sens FGSM is better than other adversarially-trained classifiers, it too is still vulnerable to attacks such as FGSM80. This vulnerability raises the question if it is possible to make a classifier robust to any type of attacks, or more practically, robust to at least a large class of attacks. We discuss this issue in the next section. Table 3: Error rates of different attacks on various adversarially-trained classifiers for MNIST. FGSM-curr means the FGSM attack on the specific classifier on the left. Adv FGSM is the classifier adversarially trained with FGSM attacks. Sens FGSM is the of minimizing FORMULA7 by gradient descent. LWA FGSM is the of minimizing FORMULA7 without the gradient-norm term. In this section, we consider the class of optimization-based attack and the class of neural-network based attacks as an approximation of the former. Using the neural-network based attack class, we formulate the attacker-defender dynamics as a game and discuss two types of equilibria -the minimax and the maximin solutions. We present algorithms that generalize the approach presented in the previous section. An attacker z(x): X → X can be more general than a specific class of attacks such as FGSM. Again, let g: X → Y is a classifier parameterized by u and l(g(x; u), y) is a loss function. If time complexity is not an issue, the following optimization-based attack max DISPLAYFORM0 which is also related to the CW attack BID2, can generate strong adversarial examples, where adversarial patterns Z = (z 1, ..., z N) are unrestricted except for the bounds such as z i − x i p ≤ η. The corresponding class of adversarial patterns Z is very large, which in strong but non-generalizable adversarial examples. Non-generalizable means the perturbation z(x) has to be recomputed for every new test sample x. While the class of optimization-based attacks is powerful, its large size makes it difficult to analytically study the optimal defense methods. To make the problem learnable, we restrict the class of patterns Z to that which can be generated by a flexible but manageable class of perturbation {z(·; v) | ∀v ∈ V }, e.g., an autoencoder of a fixed architecture where the parameter v is the network weights. This class is a clearly an approximation to the class of full optimization-based attacks, but is generalizable, i.e., no time-consuming optimization is required in the test phase but only single feedforward passes. The attack network (AttNet), as we call it, can be of any class of appropriate neural networks. Here we use a three-layer fully-connected network with 300 hiddens units per layer in this paper. Different from BID21 or BID0, we feed the label y into the input of the network along with the features x. This is analogous to using the true label y in the original FGSM. While this label input is optional but it can make the training of the attacker network easier. As with other attacks, we impose the l ∞ -norm constraint on z, i.e., z(x) − x ∞ ≤ η. Suppose now f (u, v) is the empirical risk of a classifier-attacker pair where the input x is first transformed by attack network z(x; v) and then fed to the classifier g(z(x; v); u). The attack network can be trained by gradient descent as well. 
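A PyTorch-style sketch of such an attack network and its training loop is given below. The three 300-unit ReLU layers and the label input follow the description above; how the l∞ constraint is enforced (a tanh output scaled by η) is our assumption, the classifier is assumed to accept flattened inputs and to stay fixed during this loop, and all names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttNet(nn.Module):
    # Three fully connected ReLU layers of 300 units; input is the flattened
    # image concatenated with a one-hot label, output is a bounded perturbation.
    def __init__(self, x_dim, num_classes, eta):
        super().__init__()
        self.eta = eta
        self.net = nn.Sequential(
            nn.Linear(x_dim + num_classes, 300), nn.ReLU(),
            nn.Linear(300, 300), nn.ReLU(),
            nn.Linear(300, 300), nn.ReLU(),
            nn.Linear(300, x_dim), nn.Tanh())   # perturbation in (-1, 1) before scaling

    def forward(self, x, y_onehot):
        delta = self.eta * self.net(torch.cat([x, y_onehot], dim=1))
        return torch.clamp(x + delta, -1.0, 1.0)   # keeps ||z - x||_inf <= eta

def train_attacker(attnet, classifier, loss_fn, loader, lr=1e-3, num_classes=10):
    opt = torch.optim.Adam(attnet.parameters(), lr=lr)
    for x, y in loader:
        z = attnet(x.flatten(1), F.one_hot(y, num_classes).float())
        loss = -loss_fn(classifier(z), y)   # ascend the classifier's risk
        opt.zero_grad()
        loss.backward()
        opt.step()                          # only the attacker's parameters move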
Given a classifier u, we can use gradient descent DISPLAYFORM1 to find an optimal attacker v that maximizes the risk f assuming the classifier u is fixed. Table 4 compares the error rates of the FGSM attacks and the attack network (AttNet). The table shows that AttNet is better than or comparable to FGSM in all cases. In particular, we already observed that the FGSM attack is no more effective against the classifier hardened against gradient-based attacks (Adv FGSM80 or Sens FGSM), but the AttNet can incur significant error (>∼ 0.9) for those hardened defenders. This indicates that the class of learning-based attacks is indeed different from the class of gradient-based attacks. Table 4: Error rates of FGSM vs learning-based attack network (AttNet) on various adversariallytrained classifiers for MNIST. FGSM-curr/AttNet-curr means they are computed/trained for the specific classifier on the leftmost column. Note that FGSM fails to attack hardened networks (Adv FGSM80 and Sens FGSM), whereas AttNet can still attack them successfully. Finally, we consider the dynamics of the pair of classifier-attacker when each player can change its parameters. Given the current classifier u, an optimal whitebox attacker parameter v is the maximizer of the risk DISPLAYFORM0 Consequently, the defender should choose the classifier parameters u such that the maximum risk is minimized u * arg min DISPLAYFORM1 This solution to the continuous minimax problem has a natural interpretation as the best worst-case solution. Assuming the attacker is optimal, i.e., it chooses the best attack from given u, no other defense can achieve a lower risk than the minimax defense u * in. The minimax defense is also a conservative defense. If the attacker is not optimal, and/or if the attack does not know the defense u exactly (as in blackbox attacks), the actual risk can be lower than what the minimax solution f (u *, v * (u *)) predicts. Before proceeding further, we point out that the claims above apply to the global minimizer u * and the maximizer function v * (·), but in practice we can only find local solutions for complex risk functions of deep classifiers and attackers. To solve FORMULA0, we analyze the problem similarly to- FORMULA7 from the previous section. At each iteration, the defender should choose u in expectation of the attack and minimize f (u, v * (u)). We use gradient descent DISPLAYFORM2 where the total derivative DISPLAYFORM3 Since the exact maximizer v * (u) is difficult to find, we only update v incrementally by one (or more) steps of gradient-ascent update DISPLAYFORM4 The ing formulation is closely related to the unrolled optimization BID17 proposed for training GANs, although the latter has a very different cost function f. Using the single update, the total derivative is DISPLAYFORM5 Similar to hardening a classifier against gradient-based attacks by minimizing FORMULA7 at each iteration, the gradient update of u for f (u, v) can be done using the gradient of the following sensitivitypenalized function DISPLAYFORM6 In other words, u is chosen not only to minimize the risk but also to prevent the attacker from exploiting the sensitivity of f to v. The algorithm is summarized in Alg. 1. Note that this algorithm is actually independent of the adversarial example problem, and can be used for other minimax problems as well. In analogy with the minimax problem, we can also consider the maximin solution defined by DISPLAYFORM0 where DISPLAYFORM1 is the minimizer function. 
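A PyTorch-style sketch of one iteration of the sensitivity-penalised minimax procedure summarised as Alg. 1 is given below: a single gradient-ascent step on the attacker parameters v, followed by a descent step on the classifier parameters u using the risk plus γ times the squared norm of its gradient with respect to v. Batching, schedules and other details are simplified, the attacker is assumed to be any perturbation network taking (x, y), the default γ = 1 matches the appendix, and the function name is ours.

import torch

def minimax_step(classifier, attacker, loss_fn, x, y, opt_u, opt_v, gamma=1.0):
    # 1) one gradient-ascent step on the attacker parameters v
    z = attacker(x, y)
    risk = loss_fn(classifier(z), y)
    opt_v.zero_grad()
    (-risk).backward()
    opt_v.step()

    # 2) descent step on the classifier parameters u, penalising the sensitivity
    #    of the risk to the attacker parameters: f + gamma * ||df/dv||^2
    z = attacker(x, y)
    risk = loss_fn(classifier(z), y)
    grads_v = torch.autograd.grad(risk, list(attacker.parameters()), create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads_v)
    opt_u.zero_grad()
    (risk + gamma * penalty).backward()
    opt_u.step()                            # only the classifier's parameters move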
Here we are abusing the notations for the minimax solution u *, the maximin solution v *, the minimizer u * (·), and the maximizer v * (·). Similar to the minimax solution, the maximin solution has an intuitive meaning -it is the best worst-case solution for the attacker. Assuming the defender is optimal, i.e., it chooses the best defense from that minimizes the risk f (u, v) given the attack v, no other attack can inflict a higher risk than the maximin attack v *. It is also a conservative attack. If the defender is not optimal, and/or if the defender does not know the attack v exactly, the actual risk can be higher than what the solution f (u * (v *), v * ) predicts. Note that the maximin scenario where the defender knows the attack method is not very realistic but is the opposite of the minimax scenario and provides the lower bound. To summarize, minimax and maximin defenses and attacks have the following inherent properties. Lemma 1. Let u *, v * (u), v *, u * (v) be the solutions of FORMULA0, FORMULA0, FORMULA0, FORMULA0. DISPLAYFORM2 For any given defense u, the max attack v * (u) is the most effective attack. DISPLAYFORM3 Against the optimal attack v * (u), the minimax defense u * is the most effective defense. DISPLAYFORM4 For any given attack v, the min defense u * (v) is the most effective defense. DISPLAYFORM5 Against the optimal defense u * (v), the maximin attack v * is the most effective attack. DISPLAYFORM6 The risk of the best worst-case attack is lower than that of the best worst-case defense. These properties follow directly from the definitions. The lemma helps us to better understand the dependence of defense and attack, and gives us the range of the possible risk values which can be measured empirically. To find maximin solutions, we use the same algorithm (Alg. 1) except that the variables u and v are switched and the sign of f is flipped before the algorithm is called. In addition to minimax and maximin optimization, we also consider as a reference algorithm the alternating descent/ascent method used in GAN training BID6 DISPLAYFORM0 Note that alternating descent/ascent finds local saddle points which are not necessarily minimax or maximin solutions, and therefore its solution will in general be different from the solution from Alg. 1. The difference of the solutions from three optimizations -Minimax, Maximin, and Alternating descent/ascent (Alt) -applied to a common problem, is demonstrated in FIG2. The figure shows the test error over the course of optimization starting from random initializations. One can see that Minimax (top blue curves) and Alt (middle green curves) converge to different values suggesting the learned classifiers will also be different. Table 5 compares the robustness of the classifiers trained by Minimax and Alt against the AttNet attack (1st/2nd rows and 2nd column for each η.) Minimax defense is more robust than Alt defense at η = 0.3 (0.020 vs 0.104) and at η = 0.4 (0.552 vs 0.873). For larger η's, both are unusably vulnerable. Different performance of the two classifiers implies that the minimax solution found by Alg. 1 is different from the local saddle point found by alternating descent/ascent. In addition, against FGSM attacks, Minimax is moderately robust (0.218 -0.342) despite that the classifiers are not specifically trained against gradient-based attacks. In contrast, Sens FGSM is very vulnerable (0.902 -1.000) against AttNet which we have already observed. 
This suggests that the class of AttNet attacks and the class of gradient-based attacks are indeed different, and the former class is larger than the latter. Table 5: Error rates of Minimax-, Alt-, and adversarially-trained (Sens FGSM) classifiers for MNIST. Minimax is overall better than Alt against AttNet-curr, and is also moderately robust against the out-of-class attack (FGSM-curr).Lastly, the adversarial examples generated by various attacks in the paper have diverse patterns and are shown in FIG4 of the appendix. We discuss some limitations of the framework and also propose an extension. Ideally, a defender should find a robust classifier against the worst attack from a very large class of attacks such as optimization-based attacks. However, it is difficult to train classifiers against attacks from a large class. On the other hand, if the class is too small, then the worst attack from that class is not representative of all possible worst attacks, and therefore the minimax defense found will not be robust to out-of-class attacks. The trade-off seems inevitable. It is, however, possible to build a defense against multiple specific types of attacks. Suppose z 1 (u),..., z m (u) are m different types of attacks, e.g., z 1 =FGSM, z 2 =IFGSM, etc. The minimax defense for the combined attack is the solution to the mixed continuous-discrete problem DISPLAYFORM0 Additionally, suppose z m+1 (u, v),..., z m+n (u, v) are n different types of learning-based attacks, e.g., z m+1 =2-layer dense net, z m+2 =5-layer convolutional nets, etc. The minimax defense against the mixture of multiple fixed-type and learning-based attacks can be found by solving DISPLAYFORM1 Due to the huge computational demand to solve, we leave it as a future work. Lastly, we discuss a bigger picture of the game between adversarial players. The minimax optimization arises in the leader-follower game BID1 with the constant sum constraint. The leader-follower setting makes sense because the defense (=classifier parameters) is often public knowledge and the attacker exploits the knowledge. Interestingly, the problem of the attack on privacy BID9 has a very similar formulation as the adversarial attack problem, different only in that the classifier is an attacker and the data perturbator is a defender. In the problem of privacy preservation against inference, the defender is a data transformer z(x) (parameterized by u) which perturbs the raw data, and the attacker is a classifier (parameterized by v) who tries to extract sensitive information such as identity from the perturbed data such as online activity of a person. The transformer is the leader, such as when the privacy mechanism is public knowledge, and the classifier is the follower as it attacks the given perturbed data. The risk for the defender is therefore the accuracy of the inference of sensitive information measured by −E[l(z(x; u), y; v)]. Solving the minimax risk problem (min u max v −E[l(z(x; u), y; v)]) gives us the best worst-case defense when the classifier/attacker knows the transformer/defender parameters, which therefore gives us a robust data transformer to preserve the privacy against the best inference attack (among the given class of attacks.) On the other hand, solving the maximin risk problem (max v min u −E[l(z(x; u), y; v)]) gives us the best worst-case classifier/attacker when its parameters are known to the transformer. 
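A minimal sketch of the worst-case risk over a small set of fixed attack types, as in the mixed continuous-discrete defense above, together with an FGSM attacker for reference. This is an illustrative PyTorch fragment, not the authors' code; the perturbation size `eps` and the clamping range assume image inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.3):
    """Fast gradient sign attack (one of the fixed attack types z_k(u))."""
    x = x.detach().requires_grad_(True)
    grad_x, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad_x.sign()).clamp(0.0, 1.0).detach()

def worst_case_risk(model, x, y, attacks):
    """Risk of the most damaging attack in `attacks`; minimizing this over the
    classifier parameters is the discrete part of the minimax defense above."""
    risks = torch.stack([F.cross_entropy(model(z(model, x, y)), y) for z in attacks])
    return risks.max()     # gradients flow through the currently-worst attack
```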
As one can see, the problems of adversarial attack and privacy attack are two sides of the same coin which can be addressed by similar frameworks and optimization algorithms. In this paper, we present a continuous game formulation of adversarial attacks and defenses using a learning-based attack class implemented by neural networks. We show that this class of attacks is quite different from the gradient-based attacks. While a classifier robust to all types of attack may yet be an elusive goal, the minimax defense against the neural network-based attack class is well-defined and practically achievable. We show that the proposed optimization method can find minimax defenses which are more robust than adversarially-trained classifiers and the classifiers from simple alternating descent/ascent. We demonstrate these with MNIST and CIFAR-10. The architecture of the MNIST classifier is similar to the Tensorflow model 2, and is trained with the following hyperparameters: {Batch size = 128, optimizer = AdamOptimizer with λ = 10 −4, total # of iterations=50,000.}The attack network has three hidden fully-connected layers of 300 units, trained with the following hyperparameters: {Batch size = 128, dropout rate = 0.5, optimizer = AdamOptimizer with 10 −3, total # of iterations=30,000.} For minimax, alt, and maximin optimization, the total number of iteration was 100,000. The sensitivity-penalty coefficient of γ = 1 was used in Alg. 1. We preprocess the CIFAR-10 dataset by removing the mean and normalizing the pixel values with the standard deviation of all pixels in the image. It is followed by clipping the values to ±2 standard deviations and rescaling to [−1, 1]. The architecture of the CIFAR classifier is similar to the Tensorflow model 3 but is simplified further by removing the local response normalization layers. With the simple structure, we attained ∼ 78% accuracy with the test data. The classifier is trained with the following hyperparameters: {Batch size = 128, optimizer = AdamOptimizer with λ = 10 −4, total # of iterations=100,000.}The attack network has three hidden fully-connected layers of 300 units, trained with the following hyperparameters: {Batch size = 128, dropout rate = 0.5, optimizer = AdamOptimizer with σ = 10 −3, total # of iterations=30,000.} For minimax, alt, and maximin optimization, the total number of iteration was 100,000. The sensitivity-penalty coefficient of γ = 1 was used in Alg. 1.In the rest of the appendix, we repeat all the experiments with the MNIST dataset using the CIFAR-10 dataset. Table 8: Error rates of different attacks on various adversarially-trained classifiers for CIFAR-10. FGSM-curr means the FGSM attack on the specific classifier on the leftmost column. Adv FGSM is the classifier adversally trained with FGSM attacks. Sens FGSM is the of minimizing the sensitivity penalty. LWA FGSM is the of minimizing FORMULA7 Table 9: Error rates of FGSM vs learning-based attack network (AttNet) on various adversariallytrained classifiers for CIFAR-10. FGSM-curr/AttNet-curr means they are computed/trained for the specific classifier on the leftmost column. Note that FGSM fails to attack against the'hardened' networks (Adv FGSM80 and Sens FGSM), but AttNet can still attack them successfully. | A game-theoretic solution to adversarial attacks and defenses. | 671 | scitldr |
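For reference, a short sketch of the CIFAR-10 preprocessing described in the appendix above (per-image mean removal, normalization by the standard deviation over all pixels, clipping to ±2 standard deviations, rescaling to [−1, 1]); the helper name is ours.

```python
import numpy as np

def preprocess_cifar(img):
    """Remove the mean, normalize by the std over all pixels, clip to +/- 2 std,
    and rescale to [-1, 1], as described above."""
    x = img.astype(np.float32)
    x = (x - x.mean()) / (x.std() + 1e-8)
    x = np.clip(x, -2.0, 2.0)
    return x / 2.0
```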
Supervised learning with irregularly sampled time series has been a challenge to machine learning methods due to the obstacle of dealing with irregular time intervals. Several recent papers have introduced recurrent neural network models that deal with irregularity, but most of them rely on complex mechanisms to achieve better performance. This work proposes a novel method to represent timestamps (hours or dates) as dense vectors using sinusoidal functions, called Time Embeddings. As a data input method, it can be applied to most machine learning models. The method was evaluated with two predictive tasks from MIMIC III, a dataset of irregularly sampled time series of electronic health records. Our tests showed an improvement to LSTM-based and classical machine learning models, especially with very irregular data. An irregularly (or unevenly) sampled time series is a sequence of samples with irregular time intervals between observations. This class of data adds a time sparsity factor when the intervals between observations are large. Most machine learning methods have no notion of time; they only consider observation order. This makes it harder to learn the time dependencies found in time series problems. To solve this problem, recent work proposes models that are able to deal with such irregularity, but they often rely on complex mechanisms to represent irregularity or to impute missing data. In this paper, we introduce a novel way to represent time as a dense vector representation, which is able to improve the expressiveness of irregularly sampled data; we call it Time Embeddings (TEs). The proposed method is based on sinusoidal functions discretized to create a continuous representation of time. TEs can make a model capable of estimating time intervals between observations, and they do so without the addition of any trainable parameters. We evaluate the method with a publicly available real-world dataset of irregularly sampled electronic health records called MIMIC-III. The tests were made with two tasks: a classification task (in-hospital mortality prediction) and a regression task (length of stay). To evaluate the impact of time representation on data expressiveness, we used LSTM and Self-Attentive LSTM models. Both are common RNN models that have been reported to achieve great performance in several time series classification problems, and specifically with the MIMIC-III dataset. We also evaluated simpler models such as linear and logistic regression and a shallow Multi-Layer Perceptron. All models were evaluated with and without TEs to assess possible improvements. The problem in focus of this work is how a machine learning method can learn representations from irregularly sampled data. Irregularity is found in many different areas, such as electronic health records, climate science, ecology (Clark & Bjørnstad, 2004), and astronomy. Some works deal with irregularity as a missing data problem: after discretizing the time axis into fixed non-overlapping intervals, intervals with no observations are said to contain missing values. This approach was taken by several earlier works. Lipton showed how binary indicators of missingness and observation time deltas can improve Recurrent Neural Network based models better than imputation, even with the sparsity of binary masks. Despite the improvement, an issue with these methods is that they miss the potential of how informative the observation time itself can be.
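For concreteness, a minimal NumPy sketch of this discretize-and-mask representation (a binary missingness indicator plus the time since the last observation); the exact binning and delta conventions vary across the cited works, so the details below are illustrative assumptions.

```python
import numpy as np

def mask_and_delta(times, values, grid):
    """Place an irregular univariate series on a fixed grid and return three
    channels: imputed value (zero where missing), binary mask, time delta."""
    grid = np.asarray(grid, dtype=float)
    x, mask = np.zeros(len(grid)), np.zeros(len(grid))
    for t, v in zip(times, values):
        i = int(np.argmin(np.abs(grid - t)))   # nearest bin
        x[i], mask[i] = v, 1.0
    delta, last = np.zeros(len(grid)), grid[0]
    for i, g in enumerate(grid):
        delta[i] = g - last                    # time since last observed bin
        if mask[i]:
            last = g
    return np.stack([x, mask, delta], axis=-1)
```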
More recently, a neural network model was introduced that is capable of learning how to interpolate missing data and avoid time discretization, turning an irregularly sampled time series into a regular one. Another work proposed a method to improve discretization through data augmentation based on temporal clustering. A different approach is to build complex models capable of dealing with irregularities. One line of work describes a GRU (Gated Recurrent Unit) model called GRU-D. It uses binary missingness indicators and observation time deltas as input data and incorporates them into the GRU gates to control a decay rate for missing data. Bang et al. proposed a similar method using LSTM cell states to improve the decay concept. The concept proposed in this work is similar to these approaches, as they also propose the use of an additional input to describe observation time deltas. However, instead of using time intervals, we propose a way to describe the exact moment in time using continuous cyclic functions. In this way, the time between any two irregular observations can be computed with a linear operation, without the need for a cumulative sum over all intermediate data, while also avoiding fixed-length time discretization and interpolation noise. Another difference is that Time Embeddings are dense representations that avoid the unnecessary sparsity of missingness masking. The concept of Positional Embeddings (PE) was first introduced in sequence modeling, where vectors are used to represent word positions in a sentence. It was initially proposed to improve the ability of Convolutional Neural Networks to handle temporal data: since a CNN does not consider order, the PE introduces a numerical representation of order into the embedding latent space. The Transformer network brought back the PE to improve a neural network based only on attention modules. The model has the same issue with order modeling since it contains no recurrence, so its authors propose a set of sinusoidal functions discretized at each input position: PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)). These equations have a pos variable to indicate position, i is the dimension index, and d_model is the dimension of the original embedding space. This way each dimension corresponds to a sinusoid and the model is able to learn relative positions, as argued by the authors: "since for any fixed offset k, PE(pos+k) can be represented as a linear function of PE(pos)". The total dimension of the positional embedding is defined by d_model. The wavelengths form a geometric progression from 2π to 10000 · 2π. The largest wavelength defines the maximum number of inputs: if a position is higher than 10000, the representation starts to become redundant. Inspired by the Transformer position representation, we propose a positional embedding for irregular positions. Just as the Transformer discretizes sinusoidal functions based on positions, it is possible to discretize them based on irregular hour times or dates. Applying these time descriptors to an irregularly sampled series can make the data itself representative of time. To do so, we redefine the equations based on irregular timestamps: instead of a position indicator there is a time variable, which is continuous. The dimension of the TEs (d_TE) can be parameterized, and a maxtime value defines the maximum time that can be represented. The relation between the maximum time and the TE dimension can be a limiting factor: as the maximum time increases, the distance between TEs becomes smaller. To avoid this problem, it is possible to increase the TE dimensionality or set a reasonable maximum time.
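A minimal sketch of the Time Embedding just described, obtained by replacing the discrete position with the continuous timestamp t and the constant 10000 with maxtime. The defaults d_te = 32 and maxtime = 48 hours match the experimental setting reported later, but the function itself is our illustration, not the authors' released code.

```python
import numpy as np

def time_embedding(t, d_te=32, maxtime=48.0):
    """Sinusoidal Time Embedding of a continuous timestamp t (d_te must be even)."""
    i = np.arange(d_te // 2)
    angle = t / (maxtime ** (2 * i / d_te))
    te = np.empty(d_te)
    te[0::2] = np.sin(angle)
    te[1::2] = np.cos(angle)
    return te

# Every TE has the same norm, and time offsets act (approximately) linearly
# on the embedding, as claimed in the text.
print(np.linalg.norm(time_embedding(3.7)), np.linalg.norm(time_embedding(29.1)))
```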
The main pros of using TEs can be summarized as: • Do not need any optimizable parameter, making it a model-free choice to deal with irregularity. • Time delta can be linearly computed between two TEs, possibly improving long term dependencies recognition. • All TEs have the same norm, avoiding big values as it is possible to happen with time delta descriptors when interval between observations are big. We evaluate the proposed algorithms on two benchmark tasks: in-hospital mortality and length of stay prediction. Booth tasks with the publicly available MIMIC-III dataset . The following section we will briefly describe the data acquisition and prepossessing used, followed by the test and discussion. To assess the method performance we used the MIMIC-III benchmark dataset following the benchmark defined by Harutyunyan et al. (2017; . With the available code we extracted sequences from in-hospital stays with first 48 hours and split into training and testing set. This in a dataset with 17, 903 training samples and 3, 236 test samples for in-hospital mortality after 48 hours task and 35, 344/6, 225 for length of stay after 24 hours. The dataset contains 18 variables with real values and five categorical. We did our own normalization of real variables to zero mean and unit variance, categorical variables are represented with one-hot encoding. At the length of stay task we also change labels from hours to days to avoid large outputs, to report we change it back to hours. To make the dataset even more irregular we removed randomly part of observed test data. By doing this we artificially create bigger time gaps to re-evaluate the models with an increased irregularity. All models was trained with PyTorch on a P100, with batch size of 100 and AdamW optimizer with amsgrad . We performed a five fold cross-validation with 10 runs on each fold. The model with best validation performance (AUC for in-hospital mortality and Mean Absolute Error for length of stay) was selected to compose the average performance for test set. We report the mean and standard error of evaluation measures in test set. To have a baseline we compared TEs primarily with binary masking with time interval indicators, as reported to have a good performance with RNNs in . It was compared with the proposed method with a regular LSTM and a Self-Attentive LSTM, as RNNs are reported to achieve best with the evaluated tasks (; ;). TEs was tested with dimension (d T E) of 32 and maximum time of 48 hours. As binary masking dimension and concatenated TE increase input dimension we adjusted the LSTM hidden size to keep close the number of parameters of as describe at Table 1. All neural models are connected to a 3 layers Multi-Layer Perceptron (MLP) with 32, 32, and 16 neurons. The last layers is a two neurons softmax for in-hospital mortality and one linear, with ReLU, for length of stay. The Self Attention was implemented as introduced in with the only difference of using uni-directional LSTMs. The attention size (d a) was 32 and the number of attentions (r) was 8. We also used the penalization term of C = 1 As the MIMIC III data are composed by multivariated series and we assume that Time Embeddings (TEs) should not be combined directly. So, we propose to use TEs in two different ways, as additional inputs, replacing missing mask, and as a latent space transformation, by adding TEs to the RNN output hidden state. 
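The two usages just described can be sketched as follows; this is an illustrative PyTorch module (class and argument names are ours), with 'concat' appending the TE to each observation and 'add' summing it onto the LSTM hidden state, which requires the TE dimension to equal the hidden size.

```python
import torch
import torch.nn as nn

class LSTMWithTE(nn.Module):
    def __init__(self, n_feats, d_te=32, hidden=32, mode="concat", n_classes=2):
        super().__init__()
        self.mode = mode
        in_dim = n_feats + d_te if mode == "concat" else n_feats
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # e.g. in-hospital mortality

    def forward(self, x, te):      # x: (B, T, n_feats), te: (B, T, d_te)
        if self.mode == "concat":  # TE as an additional input
            out, _ = self.lstm(torch.cat([x, te], dim=-1))
        else:                      # TE added to the hidden states (d_te == hidden)
            out, _ = self.lstm(x)
            out = out + te
        return self.head(out[:, -1])
```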
To have also a baseline of non-recurrent models and assess the TE effect on them, we tested a four layer MLP and Linear/Logistic regression (linear for length of stay and logistic for in-hospital mortality task). Figure 3: Evaluation of models at in-hospital mortality with observed data from 10% to 100% Results for in-hospital mortality shows that self-attention seems to deteriorate the vanilla LSTM performance, but when added the TEs it got improved sufficiently to surpass it and achieve our better average . In the length of stay task TEs achieved better , especially with bigger gaps at the reduced data test. TEs improved LSTM average error, but a slight worse explained variance, were binary masking had a better performance. At Figure 3 and 4 we can see the performance of models when we randomly remove observed data from 100% to 10%. With length of stay task the LSTM with TE concatenated have a overall smaller absolute error than vanilla LSTM, being surpassed only by the binary mask. At in-hospital mortality we see a similar performance with TE SA-LSTM and LSTM with binary masking. With non-recurrent models it is possible to observe how TEs does not rely on recurrence. It improved both linear/logistic regression and MLP. This paper propose a novel method to represent hour time or dates as dense vectors to improve irregularly sampled time series. It was evaluated with two different approaches and evaluated in two tasks from the MIMIC III dataset. Our method showed some improvement with most models tested, including recurrent neural networks and classic machine learning methods. Despite being outperformed by binary masking in some tests we believe TEs can still be an viable option. Specially to very irregular time series and high dimensional data, were TEs can be applied by addition without increasing the input dimensionality. We see a promising future for the method proposed. We expect to extend it to improve other types of irregular time-continuous data and also evaluate how can TE improve recent models proposed for irregularly time series, like the GRU-D , interpolation networks and Temporal-Clustering Regularization . The code for TEs reported will be publicly available in the future. | A novel method to create dense descriptors of time (Time Embeddings) to make simple models understand temporal structures | 672 | scitldr |
Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds in terms of the signal-to-noise ratio. By recasting community detection as a node-wise classification problem on graphs, we can also study it from a learning perspective. We present a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting. We show that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multiclass stochastic block models, which is believed to reach the computational threshold in these cases. In particular, we propose to augment GNNs with the non-backtracking operator defined on the line graph of edge adjacencies. The GNNs are achieved good performance on real-world datasets. In addition, we perform the first analysis of the optimization landscape of using (linear) GNNs to solve community detection problems, demonstrating that under certain simplifications and assumptions, the loss value at any local minimum is close to the loss value at the global minimum/minima. Graph inference problems encompass a large class of tasks and domains, from posterior inference in probabilistic graphical models to community detection and ranking in generic networks, image segmentation or compressed sensing on non-Euclidean domains. They are motivated both by practical applications, such as in the case of PageRank, and also by fundamental questions on the algorithmic hardness of solving such tasks. From a data-driven perspective, these problems can be formulated in unsupervised, semi-supervised or supervised learning settings. In the supervised case, one assumes a dataset of graphs with labels on their nodes, edges or the entire graphs, and attempts to perform node-wise, edge-wise and graph-wise classification by optimizing a loss over a certain parametric class, e.g. neural networks. Graph Neural Networks (GNNs) are natural extensions of Convolutional Neural Networks to graph-structured data, and have emerged as a powerful class of algorithms to perform complex graph inference leveraging labeled data (; BID3 (and references therein). In essence, these neural networks learn cascaded linear combinations of intrinsic graph operators interleaved with node-wise (or edge-wise) activation functions. Since they utilize intrinsic graph operators, they can be applied to varying input graphs, and they offer the same parameter sharing advantages as their CNN counterparts. In this work, we focus on community detection problems, a wide class of node classification tasks that attempt to discover a clustered, segmented structure within a graph. The algorithmic approaches to this problem include a rich class of spectral methods, which take advantage of the spectrum of certain operators defined on the graph, as well as approximate message-passing methods such as belief propagation (BP), which performs approximate posterior inference under predefined graphical models . Focusing on the supervised setting, we study the ability of GNNs to approximate, generalize or even improve upon these class of algorithms. Our motivation is two-fold. 
On the one hand, this problem exhibits algorithmic hardness on some settings, opening up the possibility to discover more efficient algorithms than the current ones. On the other hand, many practical scenarios fall beyond pre-specified probabilistic models, requiring data-driven solutions. We propose modifications to the GNN architecture, which allow it to exploit edge adjacency information, by incorporating the non-backtracking operator of the graph. This operator is defined over the edges of the graph and allows a directed flow of information even when the original graph is undirected. It was introduced to community detection problems by , who propose a spectral method based on the non-backtracking operator. We refer to the ing GNN model as a Line Graph Neural Network (LGNN). Focusing on important random graph families exhibiting community structure, such as the stochastic block model (SBM) and the geometric block model (GBM), we demonstrate improvements in the performance by our GNN and LGNN models compared to other methods, including BP, even in regimes within the so-called computational-to-statistical gap. A perhaps surprising aspect is that these gains can be obtained even with linear LGNNs, which become parametric versions of power iteration algorithms. We want to mention that besides community detection tasks, GNN and LGNN can be applied to other node-wise classification problems too. The reason we are focusing on community detection problems is that this is a relatively well-studied setup, for which different algorithms have been proposed and where computational and statistical thresholds have been studied in several scenarios. Moreover, synthetic datasets can be easily generated for community detection tasks. Therefore, we think it is a nice setup for comparing different algorithms, besides its practical values. The good performances of GNN and LGNN motivate our second main contribution: the analysis of the optimization landscape of simplified and linear GNN models when trained with planted solutions of a given graph distribution. Under reparametrization, we provide an upper bound on the energy gap controlling the energy difference between local and global minima (or minimum). With some assumptions on the spectral concentration of certain random matrices, this energy gap will shrink as the size of the input graphs increases, which would mean that the optimization landscape is benign on large enough graphs. • We propose an extension of GNNs that operate on the line graph using the non-backtracking operator, which yields improvements on hard community detection regimes.• We show that on the stochastic block model we reach detection thresholds in a purely data-driven fashion, in the sense that our improve upon belief propagation in hard SBM detection regimes, as well as in the geometric block model.• We perform the first analysis of the learning landscape of GNN models, showing that under certain simplifications and assumptions, they exhibit a form of "energy gap", where local mimima are confined in low-energy configurations.• We show that our model can perform well on community detection problems with real-world datasets. We are interested in a specific class of node-classification tasks in which given an input graph G = (V, E), a labeling y: V → {1, . . ., C} that encodes a partition of V into C communities is to be predicted at each node. 
We assume that a training set {(G t, y t)} t≤T is given, with which we train a model that predictsŷ = Φ(G, θ) by minimizing DISPLAYFORM0 Since y encodes a partition of C groups, the specific label of each node is only important up to a global permutation of {1, . . ., C}. Section 4.3 describes how to construct loss functions with such a property. A permutation of the observed nodes translates into the same permutation applied to the labels, which justifies models Φ that are equivariant to permutations. Also, we are interested in inferring properties of community detection algorithms that do not depend on the specific size of the graphs 1. We therefore require that the model Φ accepts graphs of variable size for the same set of parameters, similar to sequential RNN or spatial CNN models. GNN was first proposed in; Scarselli et al.. Bruna et al. (2013 generalize convolutional neural networks on general undirected graphs by using the graph Laplacian's eigenbasis. This was the first time the Laplacian operator was used in a neural network architecture to perform classification on graph inputs. consider a symmetric Laplacian generator to define a multiscale GNN architecture, demonstrated on classification tasks. use a similar generator as effective embedding mechanisms for graph signals and applies it to semi-supervised tasks. This is the closest application of GNNs to our current contribution. However, we highlight that semi-supervised learning requires bootstrapping the estimation with a subset of labeled nodes, and is mainly interested in generalization within a single, fixed graph. In comparison, our setup considers community detection across a distribution of input graphs and assumes no initial labeling on the graphs in the test dataset except for the adjacency information. There have been several extensions of GNNs by modifying their non-linear activation functions, parameter sharing strategies, and choice of graph operators (; ; ;). In particular, interpret the GNN architecture as learning an approximate message-passing algorithm, which extends the learning of hidden representations to graph edges in addition to graph nodes. relate adjacency learning with attention mechanisms, and propose a similar architecture in the context of machine translation. Another recent and related piece of work is by , who propose a generalization of GNN that captures high-order node interactions through covariant tensor algebra. Our approach to extend the expressive power of GNN using the line graph may be seen as an alternative to capture such high-order interactions. Our energy landscape analysis is related to the recent paper by , which establishes an energy bound on the local minima arising in the optimization of ResNets. In our case, we exploit the properties of the community detection problem to produce an energy bound that depends on the concentration of certain random matrices, which one may hope for as the size of the input graphs increases.'s work on data regularization for clustering and rank estimation is also motivated by the success of using Bethe-Hessian-like perturbations to improve spectral methods on sparse networks. It finds good perturbations via matrix perturbations and also has successes on the stochastic block model. 
Yang & Leskovec (2012a) curate benchmark datasets for community detection and quantify the quality of these datasets, while Yang & Leskovec (2012b) develop new algorithms for community detection by fitting to the networks the Affliation Graph Model (AGM), a generative model for graphs with overlapping communities. This section introduces our GNN architectures that include the power graph adjacency (Section 4.1) and its extension to line graphs using the non-backtracking operator (Section 4.2), as well as the design of losses invariant to global label permutations (Section 4.3). The Graph Neural Network (GNN), introduced in and later simplified in;; , is a flexible neural network architecture based on local operators on a graph G = (V, E). Given a state vector x ∈ R |V |×b on the vertices of Figure 1. Overview of the architecture of LGNN (Section 4.2). Given a graph G, we construct its line graph L(G) with the non-backtracking operator FIG0 ). In every layer, the states of all nodes in G and L(G) are updated according to. The final states of nodes in G are used to predict node-wise labels, and the trainining is performed end-to-end using standard backpropagation with a label permutation invariant loss (Section 4.3).G, we consider intrinsic linear operators of the graph that act locally on x, which can be represented as |V |-by-|V | matrices. For example, the adjacency matrix A is defined entry-wise by DISPLAYFORM0, D is a diagonal matrix with D ii being the number of edges that the ith node has. We can also define power graph adjacency matrices as A (j) = min(1, A 2 j), which encodes 2 j -hop neighborhoods into a binary graph. Finally, there is also the identity matrix, I. Given such a family of operators for each graph, DISPLAYFORM1 where DISPLAYFORM2 are trainable parameters and ρ(·) is a point-wise nonlinear activation function, chosen in this work to be the ReLU function, i.e. ρ(z) = max(0, z) for z ∈ R. Then we define DISPLAYFORM3 as the concatenation of z (k+1) and z (k+1). The layer thus includes linear "residual connections" via z (k), both to ease with the optimization when using large number of layers and to increase the expressivity of the model by enabling it to perform power iterations. Since the spectral radius of the learned linear operators in can grow as the optimization progresses, the cascade of GNN layers can become unstable to training. In order to mitigate this effect, we consider spatial batch normalization at each layer. 2 In our experiments, the initial states are set to be the degrees of the nodes, i.e., DISPLAYFORM4 satisfies the permutation equivariance property required for community detection: Given a permutation π among the nodes in the graph, Φ(G π, Πx ) = ΠΦ(G, x ), where Π is the permutation matrix associated with π. Analogy between GNN and power iterations In our setup, spatial batch normalization not only prevents gradient blowup, but also performs the orthogonalisation relative to the constant vector, which reinforces the analogy with the spectral methods for community detection, some of which is described in B.1. In essence, in certain regimes, the eigenvector of A corresponding to its second largest eigenvalue and the eigenvector of the Laplacian matrix, L = D − A, corresponding to its second smallest eigenvalue (i.e. the Fiedler vector), are both correlated with the community structure of the graph. 
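To make the layer above concrete, here is a minimal PyTorch sketch of the operator family {I, D, A, min(1, A^{2^j})} and of a single layer combining a ReLU branch with the linear residual branch followed by batch normalization. This is an illustration of the equations, not the authors' implementation; the initialization scale and the dense-matrix representation are simplifying assumptions on our part.

```python
import torch
import torch.nn as nn

def operator_family(A, J=2):
    """Operators used by one layer: identity, degree, adjacency and the
    power-graph adjacencies A^(j) = min(1, A^{2^j}) for j = 1..J."""
    n = A.shape[0]
    ops = [torch.eye(n), torch.diag(A.sum(1)), A]
    P = A.clone()
    for _ in range(J):
        P = torch.clamp(P @ P, max=1.0)
        ops.append(P)
    return ops

class GNNLayer(nn.Module):
    def __init__(self, n_ops, b_in, b_out):
        super().__init__()
        self.theta_rho = nn.Parameter(0.1 * torch.randn(n_ops, b_in, b_out))
        self.theta_lin = nn.Parameter(0.1 * torch.randn(n_ops, b_in, b_out))
        self.bn = nn.BatchNorm1d(2 * b_out)

    def forward(self, ops, x):                     # x: (n_nodes, b_in)
        zs = torch.stack([op @ x for op in ops])   # (n_ops, n_nodes, b_in)
        z_rho = torch.relu(torch.einsum("onb,obc->nc", zs, self.theta_rho))
        z_lin = torch.einsum("onb,obc->nc", zs, self.theta_lin)
        return self.bn(torch.cat([z_rho, z_lin], dim=-1))
```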
Thus, spectral methods for community detection performs power iterations on these matrices to obtain the eigenvectors of interest and predicts the community structure based on them. For example, to extract the Fiedler vector of a matrix M, whose eigenvector corresponding to the smallest eigenvalue is known to be v, one performing power iterations onM DISPLAYFORM5 If v is a constant vector, which is the case for L, then the normalization above is precisely performed within the spatial batch normalization step. By incorporating a family of operators into the neural network framework, the GNN can not only approximate but also go beyond power iterations. As explained in Section B.1, the Krylov subspace generated by the graph Laplacian is not sufficient to operate well in the sparse regime, as opposed to the space generated by {I, D, A}. The expressive power of each layer is further increased by adding multiscale versions of A, although this benefit comes at the cost of computational efficiency, especially in the sparse regime. The network depth is chosen to be of the order of the graph diameter, so that all nodes obtain information from the entire graph. In sparse graphs with small diameter, this architecture offers excellent scalability and computational complexity. Indeed, in many social networks diameters are constant (due to hubs), or log(|V |), as in the stochastic block model in the constant average degree regime . This in a model with computational complexity on the order of |V | log(|V |), making it amenable to large-scale graphs. For graphs with few cycles, posterior inference can be remarkably approximated by loopy belief propagation . As described in Section B.2, the message-passing rules are defined over the edge adjacency graph (see equation 57). Although its second-order approximation around the critical point can be efficiently approximated with a power method over the original graph, a datadriven version of BP requires accounting for the non-backtracking structure of the message-passing. In this section we describe an upgraded GNN model that exploits the non-backtracking structure. and so |V L | = 2|E|. The non-backtracking operator on the line graph is represented by a matrix B ∈ R 2|E|×2|E| defined as DISPLAYFORM0 This operator enables the directed propagation of information through on the line graph and was first proposed in the context of community detection on sparse graphs in. The message-passing rules of BP can be expressed as a diffusion in the line graph L(G) using this non-backtracking operator, with specific choices of activation function that turn product of beliefs into sums. A natural extension of the GNN architecture presented in Section 4.1 is thus to consider a second GNN defined on L(G), where B and D B = diag(B1) play the role of the adjacency and the degree matrices, respectively. This effectively defines edge features that are updated according to the edge adjacency of G. Edge and node features communicate at each layer using the edge indicator matrices P m, P d ∈ {0, 1} |V |×2|E|, defined as P mi,(i→j) = 1, P dj,(i→j) = 1, P di,(i→j) = 1, P dj,(i→j) = −1 and 0 otherwise. With the skip linear connections defined similarly, the ing model becomes DISPLAYFORM1 where DISPLAYFORM2 and the trainable parameters are θ i, θ i, θ i ∈ R b k ×b k+1 and θ i ∈ R b k+1 ×b k+1. We call such a model a Line Graph Neural Network (LGNN).In our experiments, we set x = deg(A) and y = deg(B). 
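A small sketch of the non-backtracking operator on the line graph, B_{(i→j),(i'→j')} = 1 if j = i' and j' ≠ i, built densely from an undirected edge list (adequate for small graphs; a sparse construction would be used in practice):

```python
import numpy as np

def non_backtracking(edges):
    """Dense non-backtracking matrix B over the 2|E| directed edges."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    B = np.zeros((len(directed), len(directed)))
    for k, (i, j) in enumerate(directed):
        for l, (i2, j2) in enumerate(directed):
            if j == i2 and j2 != i:     # follow the edge, but do not backtrack
                B[k, l] = 1.0
    return B, directed

# Example on a triangle plus a pendant edge.
B, dir_edges = non_backtracking([(0, 1), (1, 2), (2, 0), (2, 3)])
print(B.shape)   # (8, 8): twice the number of undirected edges
```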
For graph families with constant average degree d (as |V | grows), the line graph has size 2|E| ∼ O(d|V |), and is therefore feasible from the computational point of view. Furthermore, the construction of line graphs can be iterated to generate L(L(G)), L(L(L(G))), etc. to yield a graph hierarchy, which could capture high-order interactions among nodes of G. Such an hierarchical construction is related to other recent efforts to generalize GNNs .Relationship between LGNN and edge feature learning approaches Several authors have proposed combining node and edge feature learning. BID2 introduce edge features over directed and typed graphs, but does not discuss the undirected case. Kearnes et al. FORMULA2; learn edge features on undirected graphs using f e = g(x(i), x(j)) for an edge e = (i, j), where g is commutative on its arguments. Finally, Velickovic et al. FORMULA2 learns directed edge features on undirected graphs using stochastic matrices as adjacencies (which are either row or column-normalized). However, we are not aware of works that consider the edge adjacency structure provided by the non-backtracking matrix on the line graph. With non-backtracking matrix, our LGNN can be interpreted as learning directed edge features from an undirected graph. Indeed, if each node i contains two distinct sets of features x s (i) and x r (i), the non-backtracking operator constructs edge features from node features while preserving orientation: For an edge e = (i, j), our model is equivalent to constructing oriented edge features f i→j = g(x s (i), x r (j)) and f j→i = g(x r (i), x s (j)) (where g is trainable and not necessarily commutative on its arguments) that are subsequently propagated through the graph. Constructing such local oriented structure is shown to be important for improving performance in the next section. For comparison, we also define a linear LGNN (LGNN-L) as the the LGNN that drops the nonlinear activation functions ρ in, and a symmetric LGNN (LGNN-S) as the LGNN whose line graph is defined on the undirected edges of the original graph: In LGNN-S, two edges of G are connected in the line graph if and only if they share one common node; also, F = {P}, with P ∈ R |V |×|E| defined as P i,(j→k) = 1 if i = j or k and 0 otherwise. Let C = {1, . . ., C} denote the set of all community labels, and consider first the case where communities do not overlap. By applying the softmax function at the end, we interpret the cth dimension of the output of the models at node i as the conditional probability that the node belongs to community c, o i,c = p(y i = c |θ, G). Let G = (V, E) be the input graph and let y i be the ground truth community label of node i. Since the community structure is defined up to global permutations of the labels, we can define a loss function with respect to a given graph instance as DISPLAYFORM0 where S C denotes the permutation group of C elements. This is essentially taking the the cross entropy loss minimized over all possible permutations of C. In our experiments, we considered examples with small numbers of communities such as 2 and 5. In general scenarios where C is much larger, the evaluation of the loss function can be impractical due to the minimization over S C. 
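The permutation-invariant loss of Section 4.3 can be written directly for small C; the sketch below enumerates all C! relabelings, which is exactly the cost addressed in the following paragraph for larger C.

```python
import itertools
import torch
import torch.nn.functional as F

def permutation_invariant_loss(logits, labels, n_classes):
    """Cross-entropy minimized over all global relabelings of the communities."""
    losses = []
    for perm in itertools.permutations(range(n_classes)):
        relabeled = torch.tensor(perm, device=labels.device)[labels]
        losses.append(F.cross_entropy(logits, relabeled))
    return torch.stack(losses).min()
```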
A possible solution is to randomly partition C labels intoC groups, and then to marginalize the model outputs DISPLAYFORM1,c ∈C, and and finally use (θ) = inf π∈SC − i∈V logō i,π(ȳi) as an approximate loss value, which only involves a permutation group of size (C!).Finally, if communities may overlap, we can enlarge C to include subsets of communities and define the permutation group accordingly. For example, if there are two overlapping communities, we let C = {{1}, {2}, {1, 2}}, and only allow the permutation between 1 and 2 when computing the loss function as well as the overlap to be introduced in Section 6. As described in the numerical experiments, we found that the GNN models without nonlinear activations already provide substantial gains relative to baseline (non-trainable) algorithms. This section studies the optimization landscape of linear GNNs. Despite defining a non-convex objective, we prove that the landscape is "benign" under certain further simplifications, in the sense that the local minima are confined in sublevel sets of low energy. For simplicity, we consider only the binary c = 2 case where we replace the node-wise binary cross-entropy loss by the squared cosine distance 3, assume a single feature map (b k = 1 for all k), and focus on the GNN described in Section 4.1 (although our analysis carries equally to describe the line graph version; see remarks below). We also make the simplifying assumption to replace the layer-wise spatial batch normalization by a simpler projection onto the unit 2 ball (thus we do not remove the mean). Without loss of generality, assume that the input graph G has size n, and denote by F = {A 1, . . ., A Q} the family of graph operators appearing in. Each layer thus applies an arbitrary polynomial DISPLAYFORM0 q A q to the incoming node feature vector x (k). Given an input node vector w ∈ R n, the network output can thus be written aŝ DISPLAYFORM1 We highlight that this linear GNN setup is fundamentally different from the linear fully-connected neural networks (that is, neural networks with linear activation function), whose landscape has been analyzed in. First, the output of the GNN is on the unit sphere, which has a different geometry. Next, since the operators in F depend on the input graph, they introduce fluctuations in the landscape. In general, the operators in F are not commutative, but by considering the generalized Krylov subspace generated by powers of F, DISPLAYFORM2 Given the target y ∈ R n, the loss incurred by each pair (G, y) becomes 1 − | e,y | 2 e 2, and therefore the population loss, when expressed in terms of β, equals DISPLAYFORM3 The landscape is thus specified by a pair of random matrices Y n, X n ∈ R M ×M.Assuming that EX n 0, we write the Cholesky decomposition of EX n as EX n = R n R T n, and define DISPLAYFORM4 denote the eigenvalues of K in nondecreasing order. Then, the following theorem establishes that under appropriate assumptions, the concentration of relevant random matrices around their mean controls the energy gaps between local and global minima of L. DISPLAYFORM5, and assume that all four quantities are finite. Then if DISPLAYFORM6, where ηn,µn,νn,δn = O(δ n) for given η n, µ n, ν n as δ n → 0 and its formula is given in the appendix. Corollary 5.2. 
If (η n) n∈N *, (µ n) n∈N *, (ν n) n∈N * are all bounded sequences, and lim n→∞ δ n = 0, DISPLAYFORM7 The main strategy of the proof is to consider the actual loss function L n as a perturbation of DISPLAYFORM8, which has a landscape that is easier to analyze and 3 to account for the invariance up to global flip of label does not have poor local minima, since it is equivalent to a quadratic form defined over the sphere S M −1. Applying this theorem requires estimating spectral fluctuations of the pair X n, Y n, which in turn involve the spectrum of the C * algebras generated by the non-commutative family F. For example, for stochastic block models, it is an open problem how the bound behaves as a function of the parameters p and q. Another interesting question is to understand how the asymptotics of our landscape analysis relate to the hardness of estimation as a function of the signal-to-noise ratio. Finally, another open question is to what extent our could be extended to the non-linear residual GNN case, perhaps leveraging ideas from. We present experiments on community detection in synthetic datasets (Sections 6.1, 6.2 and Appendix C.1) as well as real-world datasets (Section 6.3). In the synthetic experiments, the performance is measured by the overlap between predicted (ŷ) and true labels (y), which quantifies how much better than random guessing a predicted labeling is, given by DISPLAYFORM0, where δ is the Kronecker delta function, and this quantity is maximized over global permutations within a graph of the set of labels. In the real-world datasets, as the communies are overlapping and not balanced, the prediction accuracy is measured by 1 n u δ y(u),ŷ(u), and the set of permutations to be maximized over is described in Section 4.3. We used Adamax with learning rate 0.004 across all experiments. All the neural network models have 30 layers and 8 features in the middle layers (i.e., b k = 8) for experiments in Sections 6.1 and 6.2, and 20 layers and 6 features for Section 6.3. GNNs and LGNNs have J = 2 across the experiments except the ablation experiments in Section C.3. The stochastic block model is a random graph model with planted community structure. In its simplest form, the graph consists of |V | = n nodes, which are partitioned into C communities, that is, each node is assigned a label y ∈ {1, ..., C}. An edge connecting any two vertices u, v is drawn independently at random with probability p if y(v) = y(u), and with probability q otherwise. In the binary case (i.e. C = 2), the sparse regime, where p, q 1/n, is well understood and provides an initial platform to compare the GNN and LGNN with provably optimal recovery algorithms (Appendix B). We consider two learning scenarios. In the first scenario, we choose different pairs of p and q, and train the models for each pair separately. In particular, for each pair of (p i, q i), we sample 6000 graphs under G ∼ SBM (n = 1000, p i, q i, C = 2) and then train the models for each i. In the second scenario, reported in Appendix C.2, we train a single set of parameters θ from a set of 6000 graphs sampled from a mixture of SBM with different pairs of (p i, q i), and average degree. Importantly, his setup shows that our models are not simply approximating known algorithms such as BP for particular SBM parameters, since the parameters vary in this dataset. For the first scenario, we chose five different pairs of (p i, q i) while fixing p i + q i, thereby corresponding to different signal-to-noise ratios (SNRs). 
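A short NumPy sketch of the experimental ingredients just described: sampling a (dense) stochastic block model with planted communities, and computing the overlap, i.e. the agreement rescaled so that random guessing scores 0, maximized over global label permutations. Helper names and the dense-matrix representation are ours.

```python
import itertools
import numpy as np

def sbm(n, C, p, q, seed=0):
    """Sample an SBM adjacency matrix A and planted labels y."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, C, size=n)
    probs = np.where(y[:, None] == y[None, :], p, q)
    A = np.triu(rng.random((n, n)) < probs, k=1)
    return (A + A.T).astype(float), y

def overlap(y_true, y_pred, C):
    """(agreement - 1/C) / (1 - 1/C), maximized over label permutations."""
    best = max(np.mean(np.asarray(perm)[y_pred] == y_true)
               for perm in itertools.permutations(range(C)))
    return (best - 1.0 / C) / (1.0 - 1.0 / C)
```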
FIG2 reports the performance of our models on the binary SBM model for the different SNRs, compared with baseline methods including BP, spectral methods using the normalized Laplacian and the Bethe Hessian as well as Graph Attention Networks (GAT) 5 from. We observe that both GNN and LGNN reach the performance of BP. In addition, even the linear LGNN achieves a performance that is quite close to that of BP, in accordance to the spectral approximations of BP given by the Bethe Hessian (see supplementary), and significantly outperforms performing 30 power iterations on the Bethe Hessian or the normalized Laplacian, as was done in the spectral methods. We also notice that our models outperform GAT in this task. We ran experiments in the dissociative case (q > p), as well as with C = 3 communities and obtained similar , which are not reported here. Table 1: Performance of different models on 5-community dissociative SBM graphs with n = 400, C = 5, p = 0, q = 18/n, corresponding to an average degree of 14.5. The first row gives the average overlap across test graphs, and the second row gives the graph-wise standard deviation of the overlap. In SBM with fewer than 4 communities, it is known that BP provably reaches the information-theoretic threshold BID0 Massoulié, 2014; ). The situation is different for k > 4, where it is conjectured that a gap emerges between the theoretical performance of MLE estimators and the performance of any polynomial-time estimation procedure . In this context, one can use the GNN models to search the space of the generalizations of BP, and attempt to improve upon the detection performance of BP for scenarios where the SNR falls within the computational-to-statistical gap. Table 1 presents for the 5-community dissociative SBM, with n = 400, p = 0 and q = 18/n. The SNR in this setup is above the information-theoretic threshold but below the asymptotic threshold above which BP is able to detect . Note that since p = 0, this also amounts to a graph coloring problem. We see that the GNN and LGNN models outperform BP in this experiment, indeed opening up the possibility to reduce the computation-information gap. That said, our model may taking advantage of finite-size effects, which will vanish as n → ∞. The asymptotic study of these gains is left for future work. In terms of average test accuracy, LGNN has the best performance. In particular, it outperforms the symmetric version of LGNN, emphasizing the importance of the non-backtracking matrix used in LGNN. Although equipped with the attention mechanism, GAT does not explicitly incorporate in itself the degree matrix, the power graph adjacency matrices or the line graph structure, and has inferior performance compared with the GNN and LGNN models. Further ablation studies on GNN and LGNN are described in Section C.3. We now compare the models on the SNAP datasets, whose domains range from social networks to hierarchical co-purchasing networks. We obtain the training set as follows. For each SNAP dataset, we start by focusing only on the 5000 top quality communities provided by the dataset. We then identify edges (i, j) that cross at least two different communities. For each of such edges, we consider pairs of communities C 1, C 2 such that i / ∈ C 2 and j / ∈ C 1, i ∈ C 1, j ∈ C 2, and extract the subset of nodes determined by C 1 ∪ C 2 together with the edges among them. The ing graph is connected since each community is connected. 
Finally, we divide the dataset into training and testing sets by enforcing that no community belongs to both the training and the testing set. In our experiment, due to computational limitations, we restrict our attention to the three smallest datasets in the SNAP collection (Youtube, DBLP and Amazon), and we restrict the largest community size to 200 nodes, which is a conservative bound. We compare the performance of GNN and LGNN models with GAT as well as the CommunityAffiliation Graph Model (AGM), which is a generative model proposed in Yang & Leskovec (2012b) that captures the overlapping structure of real-world networks. Community detection can be achieved by fitting AGM to a given network, which was shown to outperform some state-of-the-art algorithms. TAB1 compares the performance, measured with a 3-class (C = {{1}, {2}, {1, 2}}) classification accuracy up to global permutation 1 ↔ 2. GNN, LGNN, LGNN-S and GAT yield similar and outperform AGMfit. It further illustrates the benefits of data-driven models that strike the right balance between expressivity and structural design. In this work, we have studied data-driven approaches to supervised community detection with graph neural networks. Our models achieve comparable performance to BP in binary SBM for various SNRs, and outperform BP in the sparse regime of 5-class SBM that falls between the computationalto-statistical gap. This is made possible by considering a family of graph operators including the power graph adjacency matrices, and importantly by introducing the line graph equipped with the non-backtracking matrix. We also provided a theoretical analysis of the optimization landscapes of simplified linear GNN for community detection and showed the gap between the loss value at local and global minima are bounded by quantities related to the concentration of certain random matricies. One word of caution is that our empirical are inherently non-asymptotic. Whereas models trained for given graph sizes can be used for inference on arbitrarily sized graphs (owing to the parameter sharing of GNNs), further work is needed in order to understand the generalization properties as |V | increases. Nevertheless, we believe our work opens up interesting questions, namely better understanding how our on the energy landscape depend upon specific signal-to-noise ratios, or whether the network parameters can be interpreted mathematically. This could be useful in the study of computational-to-statistical gaps, where our model can be used to inquire about the form of computationally tractable approximations. Another current limitation of our model is that it presumes a fixed number of communities to be detected. Other directions of future research include the extension to the case where the number of communities is unknown and varied, or even increasing with |V |, as well as applications to ranking and edge-cut problems. A PROOF OF THEOREM 5.1For simplicity and with an abuse of notation, in the remaining part we redefine L andL in the following way, to be the negative of their original definition in the main section: DISPLAYFORM0. Thus, minimizing the loss function is equivalent to maximizing the function L n (β) redefined here. We write the Cholesky decomposition of EX n as EX n = R n R T n, and define DISPLAYFORM1 n ) T, and ∆B n = B n − I n. Given a symmetric matrix K ∈ R M ×M, we let λ 1 (K), λ 2 (K),..., λ M (K) denote the eigenvalues of K in nondecreasing order. Let us denote byβ g a global minimum of the mean-field lossL n. 
Taking a step further, we can extend this bound to the following one (the difference is in the second term on the right hand side): DISPLAYFORM0 Proof of Lemma A.1. We consider two separate cases: The first case is whenL n (β l) ≥L n (β g). DISPLAYFORM1 The other case is whenL DISPLAYFORM2 Hence, to bound the "energy gap" |L n (β l) − L n (β g)|, if suffices to bound the three terms on the right hand side of Lemma A.1 separately. First, we consider the second term, DISPLAYFORM3 Thus, we apply a change-of-variable and try to bound DISPLAYFORM4 n, where R n is invertible, we know that λ 1 (∇ 2 S n (γ l)) ≤ 0, thanks to the following lemma: DISPLAYFORM5 Next, we relate the left hand side of the inequality above to cos(γ l,γ g), thereby obtaining an upper bound on [1 − cos 2 (γ l,γ g)], which will then be used to bound |S n (γ l) −S n (γ g)|. DISPLAYFORM6 Thus, if we define DISPLAYFORM7 To bound λ 1 (∇ 2S n (γ)), we bound λ 1 (Q 1) and Q 2 as follows:SinceĀ n is symmetric, letγ 1,...γ M be the orthonormal eigenvectors ofĀ n corresponding to nonincreasing eigenvalues l 1,... l M. Note that the global minimum satisfiesγ g = ±γ 1. Write DISPLAYFORM8 Then, DISPLAYFORM9 To bound Q 2: DISPLAYFORM10 Therefore, DISPLAYFORM11 Thus, DISPLAYFORM12 This yields the desired lemma. Combining inequality 8 and Lemma A.3, we get DISPLAYFORM13 Thus, to bound the angle between γ l andγ g, we can aim to bound ∇S n (γ l) and ∇ 2 S n (γ l) − ∇ 2S n (γ l) as functions of the quantities µ n, ν n and δ n. Lemma A.4. DISPLAYFORM14 Proof of Lemma A.4. DISPLAYFORM15 Combining equations 17 and 18, we get DISPLAYFORM16 Then, by the generalized Hölder's inequality, DISPLAYFORM17 Hence, written in terms of the quantities µ n, ν n and δ n, we have DISPLAYFORM18 Lemma A.5. With δ n = (E ∆B n 6) 1 6, E|λ 1 (B n)| 6 ≤ 64 + 63δ 6 n Proof of Lemma A.5. DISPLAYFORM19 Note that DISPLAYFORM20 and for k ∈ {1, 2, 3, 4, 5}, if X is a nonnegative random variable, DISPLAYFORM21 Therefore, E|λ 1 (B n)| 6 ≤ 64 + 63E ∆B n 6.From now on, for simplicity, we introduce δ n = (64 + 63δ DISPLAYFORM22 Proof of Lemma A.6. DISPLAYFORM23 where DISPLAYFORM24 DISPLAYFORM25 DISPLAYFORM26 H 4, and we try to bound each term on the right hand side separately. For the first term, there is DISPLAYFORM27 Applying generalized Hölder's inequality, we obtain DISPLAYFORM28 For the second term, there is DISPLAYFORM29 Hence, DISPLAYFORM30 Applying generalized Hölder's inequality, we obtain DISPLAYFORM31 For H 3, note that DISPLAYFORM32 Hence, DISPLAYFORM33 Thus, DISPLAYFORM34 Applying generalized Hölder's inequality, we obtain DISPLAYFORM35 For the last term, DISPLAYFORM36 Thus, DISPLAYFORM37 Applying generalized Hölder's inequality, we obtain DISPLAYFORM38 Therefore, summing up the bounds above, we obtain DISPLAYFORM39 n (γ) ≤µ n ν n δ n (10 + 14ν n + 2δ n ν n + 16ν 2 n + 16δ n ν n + 8δ n ν 2 n + 8δ n ν n + 8δ n δ n ν) Hence, combining inequality 15, Lemma A.4 and Lemma A.6, we get 1 − cos 2 (γ l,γ g) ≤η n [4µ n ν n δ n (1 + 3ν n δ n µ n) + 1 2 µ n ν n δ n (10 + 14ν n + 2δ n ν n + 16ν 2 n + 16δ n ν n + 8δ n ν 2 n + 8δ n ν n + 8δ n δ n ν)] =µ n ν n δ n η n (9 + 19ν n + 5δ n ν n + 8ν 2 n + 8δ n ν n + 4δ n ν n 2 + 4δ n ν n + 4δ n δ n ν n)n +8δ n ν n +4δ n ν n 2 +4δ n ν n +4δ n δ n ν n. 
Thus, DISPLAYFORM40 Following the notations in the proof of Lemma A.3, we write DISPLAYFORM41 Since Y n is positive semidefinite, EY n is also positive semidefinite, and henceĀ n = R DISPLAYFORM42 Next, we bound the first and the third term on the right hand side of the inequality in Lemma A.1. Lemma A.7. ∀β, DISPLAYFORM43 Thus, we get the desired lemma by the generalized Hölder's inequality. Combining inequality 46, inequality 48 and Lemma A.7, we get DISPLAYFORM44 Meanwhile, DISPLAYFORM45 Hence, DISPLAYFORM46, or DISPLAYFORM47 Therefore, We consider graphs G = (V, E), modeling a system of N = |V | elements presumed to exhibit some form of community structure. The adjacency matrix A associated with G is the N × N binary matrix such that A i,j = 1 when (i, j) ∈ E and 0 otherwise. We assume for simplicity that the graphs are undirected, therefore having symmetric adjacency matrices. The community structure is encoded in a discrete label vector s: V → {1, . . ., C} that assigns a community label to each node, and the goal is to estimate s from observing the adjacency matrix. DISPLAYFORM48 In the binary case, we can set s(i) = ±1 without loss of generality. Furthermore, we assume that the communities are associative, which means two nodes from the same community are more likely to be connected than two nodes from the opposite communities. The quantity DISPLAYFORM49 measures the cost associated with cutting the graph between the two communities encoded by s, and we wish to minimize it under appropriate constraints . Note that i,j A i,j = s T Ds, with D = diag(A1) (called the degree matrix), and so the cut cost can be expressed as a positive semidefinite quadratic form min DISPLAYFORM50 that we wish to minimize. This shows a fundamental connection between the community structure and the spectrum of the graph Laplacian ∆ = D − A, which provides a powerful and stable relaxation of the discrete combinatorial optimization problem of estimating the community labels for each node. The eigenvector of ∆ associated with the smallest eigenvalue is, trivially, 1, but its Fiedler vector (the eigenvector associated with the second smallest eigenvalue) reveals important community information of the graph under appropriate conditions , and is associated with the graph conductance under certain normalization schemes .Given linear operator L(A) extracted from the graph (that we assume symmetric), we are thus interested in extracting eigenvectors at the edge of its spectrum. A particularly simple algorithm is the power iteration method. Indeed, the Fiedler vector of L(A) can be obtained by first extracting the leading eigenvector v ofà = L(A) I − L(A), and then iteratively compute DISPLAYFORM51 Unrolling power iterations and recasting the ing model as a trainable neural network is akin to the LISTA sparse coding model, which unrolled iterative proximal splitting algorithms .Despite the appeal of graph Laplacian spectral approaches, it is known that these methods fail in sparsely connected graphs . Indeed, in such scenarios, the eigenvectors of the graph Laplacian concentrate on nodes with dominant degrees, losing their correlation with the community structure. In order to overcome this important limitation, people have resorted to ideas inspired from statistical physics, as explained next. Graphs with labels on nodes and edges can be cast as a graphical model where the aim of clustering is to optimize label agreement. This can be seen as a posterior inference task. 
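Before turning to the graphical-model view, the power-iteration recipe sketched above can be made concrete. The following NumPy sketch assumes the spectral flip uses the scaling ‖L(A)‖ (an assumption on our part, since the exact factor is not shown above) and deflates the leading eigenvector to reach the Fiedler vector; function and variable names are illustrative.

import numpy as np

def fiedler_by_power_iteration(L, n_iter=500, seed=0):
    # Flip the spectrum so that the small eigenvalues of L become the large ones.
    n = L.shape[0]
    A_tilde = np.linalg.norm(L, 2) * np.eye(n) - L
    rng = np.random.default_rng(seed)

    def leading_eigvec(M, deflate=None):
        x = rng.normal(size=n)
        for _ in range(n_iter):
            if deflate is not None:
                x = x - deflate * (deflate @ x)  # project out an eigenvector found earlier
            x = M @ x
            x = x / np.linalg.norm(x)
        return x

    v1 = leading_eigvec(A_tilde)                 # for a graph Laplacian, the constant vector
    return leading_eigvec(A_tilde, deflate=v1)   # the next eigenvector is the Fiedler vector

Thresholding the sign of the returned vector gives the two-way partition that the relaxation above refers to.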
If we simply assume the graphical model is a Markov Random Field (MRF) with trivial compatibility functions for cliques greater than 2, the probability of a label configuration σ is given by DISPLAYFORM0 Generally, computing marginals of multivariate discrete distributions is exponentially hard. For instance, in the case of P(σ i) we are summing over |X| n−1 terms (where X is the state space of discrete variables). But if the graph is a tree, we can factorize the MRF more efficiently to compute the marginals in linear time via a dynamic programming method called the sum-product algorithm, also known as belief propagation (BP). An iteration of BP is given by DISPLAYFORM1 The beliefs (b i→j (σ i)) are interpreted as the marginal distributions of σ i. Fixed points of BP can be used to recover marginals of the MRF above. In the case of the tree, the correspondence is exact: DISPLAYFORM2 Certain sparse graphs, like SBM with constant degree, are locally similar to trees for such an approximation to be successful . However, convergence is not guaranteed in graphs that are not trees. Furthermore, in order to apply BP, we need a generative model and the correct parameters of the model. If unknown, the parameters can be derived using expectation maximization, further adding complexity and instability to the method since it is possible to learn parameters for which BP does not converge. The BP equations have a trivial fixed-point where every node takes equal probability in each group. Linearizing the BP equation around this point is equivalent to spectral clustering using the nonbacktracking matrix (NB), a matrix defined on the directed edges of the graph that indicates whether two edges are adjacent and do not coincide. Spectral clustering using NB gives significant improvements over spectral clustering with different versions of the Laplacian matrix L and the adjacency matrix A. High degree fluctuations drown out the signal of the informative eigenvalues in the case of A and L, whereas the eigenvalues of NB are confined to a disk in the complex plane except for the eigenvalues that correspond to the eigenvectors that are correlated with the community structure, which are therefore distinguishable from the rest. NB matrices are still not optimal in that they are matrices on the edge set and also asymmetric, therefore unable to enjoy tools of numerical linear algebra for symmetric matrices. showed that a spectral method can do as well as BP in the sparse SBM using the Bethe Hessian matrix defined by BH(r):= (r 2 − 1)I − rA + D, where r is a scalar parameter. This is due to a one-to-one correspondence between the fixed points of BP and the stationary points of the Bethe free energy (corresponding Gibbs energy of the Bethe approximation) . The Bethe Hessian is a scaling of the Hessian of the Bethe free energy at an extrema corresponding to the trivial fixed point of BP. Negative eigenvalues of BH(r) correspond to phase transitions in the Ising model where new clusters become identifiable. The success of the spectral method using the Bethe Hessian gives a theoretical motivation for having a family of matrices including I, D and A in our GNN defined in Section 4, because in this way the GNN is capable of expressing the algorithm of performing power iteration on the Bethe Hessian. 
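The Bethe Hessian BH(r) := (r 2 − 1)I − rA + D defined above lends itself directly to a spectral method. A minimal NumPy sketch is given below; the default r equal to the square root of the average degree is a common heuristic and an assumption on our part, not a value taken from the text.

import numpy as np

def bethe_hessian_embedding(A, r=None):
    # BH(r) = (r^2 - 1) I - r A + D, with D the degree matrix.
    n = A.shape[0]
    D = np.diag(A.sum(axis=1))
    if r is None:
        r = np.sqrt(A.sum() / n)   # assumed choice: sqrt of the average degree
    BH = (r ** 2 - 1) * np.eye(n) - r * A + D
    vals, vecs = np.linalg.eigh(BH)
    # Negative eigenvalues signal detectable structure (the phase transitions mentioned above);
    # the associated eigenvectors are returned and can be clustered, e.g. with k-means, into labels.
    return vecs[:, vals < 0]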
While belief propagation requires a generative model, and the spectral method using the Bethe Hessian requires the selection of the parameter r, whose optimal value also depends on the underlying generative model, the GNN does not need a generative model and is able to learn and then make predictions in a data-driven fashion. We briefly review the main properties needed in our analysis, and refer the interested reader to BID0 for an excellent recent review. The stochastic block model (SBM) is a random graph model denoted by SBM (n, p, q, C). Implicitly there is an F: V → {1, . . ., C} associated with each SBM graph, which assigns community labels to each vertex. One obtains a graph from this generative model by starting with n vertices and connecting any two vertices u, v independently at random with probability p if F (v) = F (u), and with probability q if F (v) ≠ F (u). We say the SBM is balanced if the communities are the same size. Let F̂ n: V → {1, . . ., C} be our predicted community labels for SBM (n, p, q, C). We say that the F̂ n's give exact recovery on a sequence {SBM (n, p, q)} n if P(F n = F̂ n) → 1 as n → ∞, and give weak recovery or detection if ∃ε > 0 such that P(|F n − F̂ n | ≥ 1/k + ε) → 1 as n → ∞ (i.e., the F̂ n's do better than random guessing). It is harder to tell communities apart if p is close to q (if p = q we just get an Erdős-Rényi random graph, which has no communities). In the two-community case, it was shown that exact recovery is possible on SBM (n, p = a log n/n, q = b log n/n) if and only if BID1 BID1. For exact recovery to be possible, p, q must grow at least O(log n) or else the sequence of graphs will not be connected, and thus the vertex labels will be underdetermined. There is no information-computation gap in this regime, and so there exist polynomial time algorithms when recovery is possible BID0. In the sparser regime of constant degree, SBM (n, p = a/n, q = b/n), detection is the best we could hope for. The constant degree regime is also of most interest to us for real world applications, as most large datasets have bounded degree and are extremely sparse. It is also a very challenging regime; spectral approaches using the Laplacian in its various (un)normalized forms or the adjacency matrix, as well as semidefinite programming (SDP) methods do not work well in this regime due to large fluctuations in the degree distribution that prevent eigenvectors from concentrating on the clusters BID0. first proposed the BP algorithm on the SBM, which was proven to yield Bayesian optimal values in. DISPLAYFORM0 In the constant degree regime with k balanced communities, the signal-to-noise ratio is defined as SNR = (a − b) 2 /(k(a + (k + 1)b)), and the Kesten-Stigum (KS) threshold is given by SNR = 1 BID0. When SNR > 1, detection can be solved in polynomial time by BP BID0. For k = 2, it has been shown that when SNR < 1, detection is not solvable, and therefore SNR = 1 is both the computational and the information theoretic threshold BID0. For k > 4, it has been shown that for some SNR < 1, there exist non-polynomial time algorithms that are able to solve the detection problem BID0.
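As a concrete reference point, the SBM generative process just described is straightforward to simulate; the sketch below assumes balanced communities with n divisible by C, and in the constant-degree regime it would be called with p = a/n and q = b/n.

import numpy as np

def sample_sbm(n, p, q, C, seed=0):
    # n vertices, C equal-size communities; edge prob. p within and q across communities.
    rng = np.random.default_rng(seed)
    labels = rng.permutation(np.repeat(np.arange(C), n // C))
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(int)
    return A, labels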
Furthermore, it is conjectured that no polynomial time algorithm can solve detection when SNR < 1, in which case a gap would exist between the information theoretic threshold and the KS threshold BID0.
C FURTHER EXPERIMENTS
C.1 GEOMETRIC BLOCK MODEL
Table 3: Overlap performance (in percentage) of GNN and LGNN on graphs generated by the Geometric Block Model compared with two spectral methods
Model             S = 1       S = 2       S = 4
Norm. Laplacian   1 ± 0.5     1 ± 0.6     1 ± 1
Bethe Hessian     18 ± 1      38 ± 1      38 ± 2
GNN               20 ± 0.4    39 ± 0.5    39 ± 0.5
LGNN              22 ± 0.4    50 ± 0.5    76 ± 0.5
The success of belief propagation on the SBM relies on its locally hyperbolic properties, which make it treelike with high probability. This behavior is completely different if one considers random graphs with locally Euclidean geometry. The Geometric Block Model is a random graph generated as follows. We start by sampling n points x 1,..., x n i.i.d. from a Gaussian mixture model given by means µ 1,... µ k ∈ R d at distances S apart and identity covariances. The label of each sampled point corresponds to which Gaussian it belongs to. We then draw an edge between two nodes i, j if x i − x j ≤ T / √ n. Due to the triangle inequality, the model contains a large number of short cycles, which affects the performance of loopy belief propagation. This motivates other estimation algorithms based on motif-counting that require knowledge of the model likelihood function . (Figure caption fragment: left: k = 2. We verify that BH(r) models cannot perform detection at both ends of the spectrum simultaneously.) Table 3 shows the performance of GNN and LGNN on the binary GBM model, obtained with d = 2, n = 500, T = 5 √ 2 and varying S, as well as the performances of two spectral methods, using respectively the normalized Laplacian and the Bethe Hessian, which approximates BP around its stationary solution. We note that LGNN model, thanks to its added flexibility and the multiscale nature of its generators, is able to significantly outperform both spectral methods as well as the baseline GNN. We report here our experiments on the SBM mixture, generated with G ∼ SBM (n = 1000, p = kd − q, q ∼ Unif(0,d − d), C = 2), where the average degree d is either fixed constant or also randomized with d ∼ Unif(1, t). FIG4 shows the overlap obtained by our model compared with several baselines. Our GNN model is either competitive with BH or outperforms BH, which achieves the state of the art along with , despite not having any access to the underlying generative model (especially in cases where GNN was trained on a mixture of SBM and thus must be able to generalize the r parameter in BH). They all outperform by a wide margin spectral clustering methods using the symmetric Laplacian and power method applied to BH I − BH using the same number of layers as our model. Thus GNN's ability to predict labels goes beyond approximating spectral decomposition via learning the optimal r for BH(r). The model architecture could allow it to learn a higher dimensional function of the optimal perturbation of the multiscale adjacency basis, as well as nonlinear power iterations, that amplify the informative signals in the spectrum. Compared to f, each of h, i and k has one fewer operator in F, and j has two fewer. We see that with the absence of A, k has much worse performance than the other four, indicating the importance of the power graph adjacency matrices. Interestingly, with the absence of I, i actually has better average accuracy than f.
One possible explanation is that in SBM, each node has the same expected degree, and hence I may not be very far from D, which might make having both I and D in the family redundant to some extent. Comparing GNN models a, b and c, we see it is not the case that having larger J will always lead to better performance. Compared to f, GNN models c, d and e have similar numbers of parameters but all achieve worse average test accuracy, indicating that the line graph structure is essential for the good performance of LGNN in this experiment. In addition, l also performs worse than f, indicating the significance of the non-backtracking line graph compared to the symmetric line graph. | We propose a novel graph neural network architecture based on the non-backtracking matrix defined over the edge adjacencies and demonstrate its effectiveness in community detection tasks on graphs. | 673 | scitldr
Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features. Finally we observe that sharing residual layers naively leads to representation explosion and hurts generalization performance, and show that simple existing strategies can help alleviating this problem. Traditionally, deep neural network architectures (e.g. , , etc.) have been compositional in nature, meaning a hidden layer applies an affine transformation followed by non-linearity, with a different transformation at each layer. However, a major problem with deep architectures has been that of vanishing and exploding gradients. To address this problem, solutions like better activations , weight initialization methods; and normalization methods; BID0 have been proposed. Nonetheless, training compositional networks deeper than 15 − 20 layers remains a challenging task. Recently, residual networks (Resnets He et al. (2016a) ) were introduced to tackle these issues and are considered a breakthrough in deep learning because of their ability to learn very deep networks and achieve state-of-the-art performance. Besides this, performance of Resnets are generally found to remain largely unaffected by removing individual residual blocks or shuffling adjacent blocks. These attributes of Resnets stem from the fact that residual blocks transform representations additively instead of compositionally (like traditional deep networks). This additive framework along with the aforementioned attributes has given rise to two school of thoughts about Resnets-the ensemble view where they are thought to learn an exponential ensemble of shallower models , and the unrolled iterative estimation view; , where Resnet layers are thought to iteratively refine representations instead of learning new ones. While the success of Resnets may be attributed partly to both these views, our work takes steps towards achieving a deeper understanding of Resnets in terms of its iterative feature refinement perspective. Our contributions are as follows:1. We study Resnets analytically and provide a formal view of iterative feature refinement using Taylor's expansion, showing that for any loss function, a residual block naturally encourages representations to move along the negative gradient of the loss with respect to hidden representations. Each residual block is therefore encouraged to take a gradient step in order to minimize the loss in the hidden representation space. We empirically confirm this by measuring the cosine between the output of a residual block and the gradient of loss with respect to the hidden representations prior to the application of the residual block.2. 
We empirically observe that Resnet blocks can perform both hierarchical representation learning (where each block discovers a different representation) and iterative feature refinement (where each block improves slightly but keeps the semantics of the representation of the previous layer). Specifically in Resnets, lower residual blocks learn to perform representation learning, meaning that they change representations significantly and removing these blocks can sometimes drastically hurt prediction performance. The higher blocks on the other hand essentially learn to perform iterative inference-minimizing the loss function by moving the hidden representation along the negative gradient direction. In the presence of shortcut connections 1, representation learning is dominantly performed by the shortcut connection layer and most of residual blocks tend to perform iterative feature refinement.3. The iterative refinement view suggests that deep networks can potentially leverage intensive parameter sharing for the layer performing iterative inference. But sharing large number of residual blocks without loss of performance has not been successfully achieved yet. Towards this end we study two ways of reusing residual blocks: 1. Sharing residual blocks during training; 2. Unrolling a residual block for more steps that it was trained to unroll. We find that training Resnet with naively shared blocks leads to bad performance. We expose reasons for this failure and investigate a preliminary fix for this problem. Recently, several papers have investigated the behavior of Resnets (a). In , authors argue that Resnets are an ensemble of relatively shallow networks. This is based on the unraveled view of Resnets where there exist an exponential number of paths between the input and prediction layer. Further, observations that shuffling and dropping of residual blocks do not affect performance significantly also support this claim. Other works discuss the possibility that residual networks are approximating recurrent networks . This view is in part supported by the observation that the mathematical formulation of Resnets bares similarity to LSTM , and that successive layers cooperate and preserve the feature identity. Resnets have also been studied from the perspective of boosting theory. In this work the authors propose to learn Resnets in a layerwise manner using a local classifier. Our work has critical differences compared with the aforementioned studies. Most importantly we focus on a precise definition of iterative inference. In particular, we show that a residual block approximate a gradient descent step in the activation space. Our work can also be seen as relating the gap between the boosting and iterative inference interpretations since having a residual block whose output is aligned with negative gradient of loss is similar to how gradient boosting models work. Humans frequently perform predictions with iterative refinement based on the level of difficulty of the task at hand. A leading hypothesis regarding the nature of information processing that happens in the visual cortex is that it performs fast feedforward inference for easy stimuli or when quick response time is needed, and performs iterative refinement of prediction for complex stimuli . The latter is thought to be done by lateral connections within individual layers in the brain that iteratively act upon the current state of the layer to update it. This mechanism allows the brain to make fine grained predictions on complex tasks. 
A characteristic attribute of this mechanism is the recursive application of the lateral connections which can be thought of as shared weights in a recurrent model. The above views suggest that it is desirable to have deep network models that perform parameter sharing in order to make the iterative inference view complete. Our goal in this section is to formalize the notion of iterative inference in Resnets. We study the properties of representations that residual blocks tend to learn, as a of being additive in nature, in contrast to traditional compositional networks. Specifically, we consider Resnet architectures (see FIG0) where the first hidden layer is a convolution layer, which is followed by L residual blocks which may or may not have shortcut connections in between residual blocks. A residual block applied on a representation h i transforms the representation as, DISPLAYFORM0 Consider L such residual blocks stacked on top of each other followed by a loss function. Then, we can Taylor expand any given loss function L recursively as, DISPLAYFORM1 Here we have Taylor expanded the loss function around h L−1. We can similarly expand the loss function recursively around h L−2 and so on until h i and get, DISPLAYFORM2 Notice we have explicitly only written the first order terms of each expansion. The rest of the terms are absorbed in the higher order terms O. Further, the first order term is a good approximation when the magnitude of F j is small enough. In other cases, the higher order terms come into effect as well. Thus in part, the loss equivalently minimizes the dot product between F (h i) and DISPLAYFORM3 ∂hi, which can be achieved by making F (h i) point in the opposite half space to that of ∂L(hi) ∂hi. In other words, h i + F (h i) approximately moves h i in the same half space as that of − ∂L(hi) ∂hi. The overall training criteria can then be seen as approximately minimizing the dot product between these 2 terms along a path in the h space between h i and h L such that loss gradually reduces as we take steps from h i to h L. The above analysis is justified in practice, as Resnets' top layers output F j has small magnitude , which we also report in Fig. 2.Given our analysis we formalize iterative inference in Resnets as moving down the energy (loss) surface. It is also worth noting the resemblance of the function of a residual block to stochastic gradient descent. We make a more formal argument in the appendix. introduce for the purpose of our analysis (described below). Our main goal is to validate that residual networks perform iterative refinement as discussed above, showing its various consequences. Specifically, we set out to empirically answer the following questions: • Do residual blocks in Resnets behave similarly to each other or is there a distinction between blocks that perform iterative refinement vs. representation learning? • Is the cosine between ∂L(hi) ∂hi and F i (h i) negative in residual networks?• What kind of samples do residual blocks target?• What happens when layers are shared in Resnets?Resnet architectures: We use the following four architectures for our analysis:1. Original Resnet-110 architecture: This is the same architecture as used in He et al. 
(2016b) starting with a 3 × 3 convolution layer with 16 filters followed by 54 residual blocks in three different stages (of 18 blocks each with 16, 32 and 64 filters respectively) each separated by a shortcut connections (1 × 1 convolution layers that allow change in the hidden space dimensionality) inserted after the 18 th and 36 th residual blocks such that the 3 stages have hidden space of height-width 32 × 32, 16 × 16 and 8 × 8. The model has a total of 1, 742, 762 parameters. This architecture starts with a 3 × 3 convolution layer with 100 filters. This is followed by 10 residual blocks such that all hidden representations have the same height and width of 32 × 32 and 100 filters are used in all the convolution layers in residual blocks as well.3. Avg-pooling Resnet: This architecture repeats the residual blocks of the single representation Resnet (described above) three times such that there is a 2 × 2 average pooling layer after each set of 10 residual blocks that reduces the height and width after each stage by half. Also, in contrast to single representation architecture, it uses 150 filters in all convolution layers. This is followed by the classification block as in the single representation Resnet. It has 12, 201, 310 parameters. We call this architecture the avg-pooling architecture. We also ran experiments with max pooling instead of average pooling but do not report because they were similar except that max pool acts more non-linearly compared with average pooling, and hence the metrics from max pooling are more similar to those from original Resnet.4. Wide Resnet: This architecture starts with a 3 × 3 convolution layer followed by 3 stages of four residual blocks with 160, 320 and 640 number of filters respectively, and 3 × 3 kernel size in all convolution layers. This model has a total of 45,732,842 parameters. For all architectures, we use He-normal weight initialization as suggested in , and biases are initialized to 0.For residual blocks, we use BatchNorm→ReLU→Conv→BatchNorm→ReLU→Conv as suggested in He et al. (2016b).The classifier is composed of the following elements: BatchNorm→ReLU→AveragePool In this experiment we directly validate our theoretical prediction about Resnets minimizing the dot product between gradient of loss and block output. To this end compute the cosine loss DISPLAYFORM0. A negative cosine loss and small F i together suggest that F i is refining features by moving them in the half space of − ∂L(hi) ∂hi, thus reducing the loss value for the corresponding data samples. Figure 4 shows the cosine loss for CIFAR-10 on train and validation sets. These figures show that cosine loss is consistently negative for all residual blocks but especially for the higher residual blocks. Also, notice for deeper architectures (original Resnet and pooling Resnet), the higher blocks achieve more negative cosine loss and are thus more iterative in nature. Further, since the higher residual blocks make smaller changes to representation (figure 2), the first order Taylor's term becomes dominant and hence these blocks effectively move samples in the half space of the negative cosine loss thus reducing loss value of prediction. This formalizes the sense in which residual blocks perform iterative refinement of features-move representations in the half space of − ∂L(hi) ∂hi. In this section, we are interested in investigating the behavior of residual layers in terms of representation learning vs. refinement of features. To this end, we perform the following experiments. 
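The residual blocks described above (BatchNorm→ReLU→Conv→BatchNorm→ReLU→Conv) and the cosine-loss measurement of the first experiment can be sketched in PyTorch as follows. This is a simplified sketch rather than the authors' code: the module and function names are ours, and the gradient is taken with autograd at the representation entering the block.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PreActResidualBlock(nn.Module):
    # Residual function F_i(h): BatchNorm -> ReLU -> Conv -> BatchNorm -> ReLU -> Conv.
    # The caller forms h + F_i(h).
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, h):
        return self.conv2(F.relu(self.bn2(self.conv1(F.relu(self.bn1(h))))))

def cosine_loss(block, head, h, target):
    # cos(F_i(h_i), dL/dh_i): gradient w.r.t. the representation entering the block,
    # with the loss evaluated through the rest of the network (head).
    h = h.detach().requires_grad_(True)
    loss = F.cross_entropy(head(h + block(h)), target)
    grad_h, = torch.autograd.grad(loss, h)
    f = block(h).detach()
    return F.cosine_similarity(f.flatten(1), grad_h.flatten(1), dim=1).mean()

A consistently negative value of cosine_loss is the signature of iterative refinement discussed above.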
2 ratio DISPLAYFORM0. For every such block in a Resnet, we measure the 2 ratio of F i (h i) 2 / h i 2 averaged across samples. This ratio directly shows how significantly F i changes the representation h i; a large change can be argued to be a necessary condition for layer to perform representation learning. Figure 2 shows the 2 ratio for CIFAR-10 on train and validation sets. For single representationResnet and pooling Resnet, the first few residual blocks (especially the first residual block) changes representations significantly (up to twice the norm of the original representation), while the rest of the higher blocks are relatively much less significant and this effect is monotonic as we go to higher blocks. However this effect is not as drastic in the original Resnet and wide Resnet architectures which have two 1 × 1 (shortcut) convolution layers, thus adding up to a total of 3 convolution layers in the main path of the residual network (notice there exists only one convolution layer in the main path for the other two architectures). This suggests that residual blocks in general tend to learn to refine features but in the case when the network lacks enough compositional layers in the main path, lower residual blocks are forced to change representations significantly, as a proxy for the absence of compositional layers. Additionally, small 2 ratio justifies first order approximation used to derive our main in Sec. 3.2. Effect of dropping residual layer on accuracy: We drop individual residual blocks from trained Resnets and make predictions using the rest of network on validation set. This analysis shows the significance of individual residual blocks towards the final accuracy that is achieved using all the residual blocks. Note, dropping individual residual blocks is possible because adjacent blocks operate in the same feature space. Figure 3 shows the of dropping individual residual blocks. As one would expect given above analysis, dropping the first few residual layers (especially the first) for single representation Resnet and pooling Resnet leads to catastrophic performance drop while dropping most of the higher residual layers have minimal effect on performance. On the other hand, performance drops are not drastic for the original Resnet and wide Resnet architecture, which is in agreement with the observations in 2 ratio experiments above. In another set of experiments, we measure validation accuracy after individual residual block during the training process. This set of experiments is achieved by plugging the classifier right after each residual block in the last stage of hidden representation (i.e., after the last shortcut connection, if any). This is shown in figure 5. The figures show that accuracy increases very gradually when adding more residual blocks in the last stage of all architectures. In this section we investigate which samples get correctly classified after the application of a residual block. Individual residual blocks in general lead to small improvements in performance. Intuitively, since these layers move representations minimally (as shown by previous analysis), the samples that lead to these minor accuracy jump should be near the decision boundary but getting misclassified by a slight margin. To confirm this intuition, we focus on borderline examples, defined as examples that require less than 10% probability change to flip prediction to, or from the correct class. 
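The two diagnostics used in these experiments, the ratio between the norms of F i (h i) and h i and the selection of borderline examples, can be computed as follows. The borderline criterion below is our reading of the 10% definition above and should be treated as an illustrative assumption.

import torch

def l2_ratio(block, h):
    # Average of ||F_i(h_i)||_2 / ||h_i||_2 over the batch.
    f = block(h)
    return (f.flatten(1).norm(dim=1) / h.flatten(1).norm(dim=1)).mean()

def borderline_mask(logits, targets, margin=0.10):
    # Examples needing less than a 10% probability change to flip the prediction
    # to, or away from, the correct class.
    probs = torch.softmax(logits, dim=1)
    pred_prob = probs.max(dim=1).values
    true_prob = probs.gather(1, targets[:, None]).squeeze(1)
    runner_up = probs.topk(2, dim=1).values[:, 1]
    correct = probs.argmax(dim=1) == targets
    return torch.where(correct, pred_prob - runner_up < margin, pred_prob - true_prob < margin)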
We measure loss, accuracy and entropy over borderline examples over last 5 blocks of the network using the network final classifier. Experiment is performed on CIFAR-10 using Resnet-110 architecture. Fig 6 shows evolution of loss and accuracy on three groups of examples: borderline examples, already correctly classified and the whole dataset. While overall accuracy and loss remains similar across the top residual blocks, we observe that a significant chunk of borderline examples gets corrected by the immediate next residual block. This exposes the qualitative nature of examples that these feature refinement layers focus on, which is further reinforced by the fact that entropy decreases for all considered subsets. We also note that while train loss drops uniformly across layers, test sets loss increases after last block. Correcting this phenomenon could lead to improved generalization in Resnets, which we leave for future work. A fundamental requirement for a procedure to be truly iterative is to apply the same function. In this section we explore what happens when we unroll the last block of a trained residual network for more steps than it was trained for. Our main goal is to investigate if iterative inference generalizes to more steps than it was trained on. We focus on the same model as discussed in previous section, Resnet-110, and unroll the last residual block for 20 extra steps. Naively unrolling the network leads to activation explosion (we observe similar behavior in Sec. 4.5). To control for that effect, we added a scaling factor on the output of the last residual blocks. We hypothesize that controlling the scale limits the drift of the activation through the unrolled layer, i.e. they remains in a given neighbourhood on which the network is well behaved. Similarly to Sec. 4.3 we track evolution of We first investigate how unrolling blocks impact loss and accuracy. Loss on train set improved uniformly from 0.0012 to 0.001, while it increased on test set. There are on average 51 borderline examples in test set 2, on which performance is improved from 43% to 53%, which yields slight improvement in accuracy on test set. Next we shift our attention to cosine loss. We observe that cosine loss remains negative on the first two steps without rescaling, and all steps after scaling. Figure 7 shows evolution of loss and accuracy on the three groups of examples: borderline examples, already correctly classified and the whole dataset. Cosine loss and 2 ratio for each block are reported in Appendix E.To summarize, unrolling residual network to more steps than it was trained on improves both loss on train set, and maintains (in given neighbourhood) negative cosine loss on both train and test set. Our suggest that top residual blocks should be shareable, because they perform similar iterative refinement. We consider a shared version of Resnet-110 model, where in each stage we share all the residual blocks from the 5 th block. All shared Resnets in this section have therefore a similar number of parameters as Resnet-38. Contrary to we observe that naively sharing the higher (iterative refinement) residual blocks of a Resnets in general leads to bad performance 3 (especially for deeper Resnets).First, we compare the unshared and shared version of Resnet-110. The shared version uses approximately 3 times less parameters. In Fig. 8, we report the train and validation performances of the Resnet-110. 
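Returning to the unrolling experiment described earlier in this passage, reapplying the last trained block for extra steps with a damping factor on its output amounts to the loop below; the particular scale value is an assumption, since the constant is not given in the text.

import torch

@torch.no_grad()
def unroll_last_block(block, h, extra_steps=20, scale=0.1):
    # Reapply the last residual block beyond the depth it was trained for,
    # scaling each update so the activations stay in a well-behaved neighbourhood.
    for _ in range(extra_steps):
        h = h + scale * block(h)
    return h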
We observe that naively sharing parameters of the top residual blocks leads both to overfitting (given similar training accuracy, the shared Resnet-110 has significantly lower validation performances) and underfitting (worse training accuracy than Resnet-110). We also compared our shared model with a Resnet-38 that has a similar number of parameters and observe worse validation performances, while achieving similar training accuracy. We notice that sharing layers make the layer activations explode during the forward propagation at initialization due to the repeated application of the same operation (Fig 8, right). Consequently, the norm of the gradients also explodes at initialization (Fig. 8, center).To address this issue we introduce a variant of recurrent batch normalization , which proposes to initialize γ to 0.1 and unshare statistics for every step. On top of this strategy, we also unshare γ and β parameters. Tab. 1 shows that using our strategy alleviates explosion problem and leads to small improvement over baseline with similar number of parameters. We also perform an ablation to study, see Figure. 9 (left), which show that all additions to naive strategy are necessary and drastically reduce the initial activation explosion. Finally, we observe a similar trend for cosine loss, intermediate accuracy, and 2 ratio for the shared Resnet as for the unshared Resnet discussed in the previous Sections. Full are reported in Appendix D.Unshared Batch Normalization strategy therefore mitigates this exploding activation problem. This problem, leading to exploding gradient in our case, appears frequently in recurrent neural network. This suggests that future unrolled Resnets should use insights from research on recurrent networks optimization, including careful initialization and parametrization changes (Our main contribution is formalizing the view of iterative refinement in Resnets and showing analytically that residual blocks naturally encourage representations to move in the half space of negative loss gradient, thus implementing a gradient descent in the activation space (each block reduces loss and improves accuracy). We validate theory experimentally on a wide range of Resnet architectures. We further explored two forms of sharing blocks in Resnet. We show that Resnet can be unrolled to more steps than it was trained on. Next, we found that counterintuitively training residual blocks with shared blocks leads to overfitting. While we propose a variant of batch normalization to mitigate it, we leave further investigation of this phenomena for future work. We hope that our developed formal view, and practical , will aid analysis of other models employing iterative inference and residual connections. ∂ho, then it is equivalent to updating the parameters of the convolution layer using a gradient update step. To see this, consider the change in h o from updating parameters using gradient descent with step size η. This is given by, DISPLAYFORM0 Thus, moving h o in the half space of − ∂L ∂ho has the same effect as that achieved by updating the parameters W, b using gradient descent. Although we found this insight interesting, we don't build upon it in this paper. We leave this as a future work. Here we report the experiments as done in sections 4.2 and 4.1, for CIFAR-100 dataset. The plots are shown in figures 10, 11 and 12. The are same as reported in the main text for CIFAR-10. 
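A minimal sketch of the batch-normalization variant used here for shared residual blocks: the convolution weights are shared across unrolled steps, while each step keeps its own BatchNorm statistics and its own γ and β, with γ initialized to 0.1. The module name and the per-stage wiring are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedBlockUnsharedBN(nn.Module):
    def __init__(self, channels, steps):
        super().__init__()
        # Convolutions shared across all unrolled steps of this block.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        # One BatchNorm per step: separate statistics and separate gamma/beta.
        self.bn1 = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(steps))
        self.bn2 = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(steps))
        for bn in list(self.bn1) + list(self.bn2):
            nn.init.constant_(bn.weight, 0.1)   # gamma initialized to 0.1
            nn.init.zeros_(bn.bias)

    def forward(self, h):
        for t in range(len(self.bn1)):
            f = self.conv1(F.relu(self.bn1[t](h)))
            f = self.conv2(F.relu(self.bn2[t](f)))
            h = h + f
        return h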
Here we plot the accuracy, cosine loss and 2 ratio metrics corresponding to each individual residual block on validation during the training process for CIFAR-10 (figures 13, 14, 5) and 16, 17). These plots are recorded only for the residual blocks in the last space for each architecture (this is because otherwise the dimensions of the output of the residual block and the classifier will not match). In the case of cosine loss after individual residual block, this set of experiments is achieved by plugging the classifier right after each hidden representation and measuring the cosine between the gradient w.r.t. hidden representation and the corresponding residual block's output. We find that the accuracy after individual residual blocks increases gradually as we move from from lower to higher residua blocks. Cosine loss on the other hand consistently remains negative for all architectures. Finally 2 ratio tends to increase for residual blocks as training progresses. In this section we extend from Sec. 4.5. We report cosine loss, intermediate accuracy, and 2 ratio for naively shared Resnet in FIG0, and with unshared batch normalization in Fig.??. In this section we report additional for unrolling residual network. Figure 20 shows evolution of cosine loss an 2 ratio for Resnet-110 with unrolled last block for 20 additional steps. | Residual connections really perform iterative inference | 674 | scitldr |
We develop end-to-end learned reconstructions for lensless mask-based cameras, including an experimental system for capturing aligned lensless and lensed images for training. Various reconstruction methods are explored, on a scale from classic iterative approaches (based on the physical imaging model) to deep learned methods with many learned parameters. In the middle ground, we present several variations of unrolled alternating direction method of multipliers (ADMM) with varying numbers of learned parameters. The network structure combines knowledge of the physical imaging model with learned parameters updated from the data, which compensate for artifacts caused by physical approximations. Our unrolled approach is 20X faster than classic methods and produces better reconstruction quality than both the classic and deep methods on our experimental system. | We improve the reconstruction time and quality on an experimental mask-based lensless imager using an end-to-end learning approach which incorporates knowledge of the imaging model. | 675 | scitldr |
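As an illustration of the middle ground this abstract describes, a model-based network unrolls a fixed number of iterations of a classical solver and learns a small number of parameters per iteration. The sketch below is a simplified unrolled proximal-gradient analogue rather than the paper's ADMM variants, with a generic forward operator; every name and the update form are assumptions made for illustration only.

import torch
import torch.nn as nn

class UnrolledReconstructor(nn.Module):
    # x_{k+1} = soft_threshold(x_k - alpha_k * A^T (A x_k - b)), with alpha_k and tau_k learned.
    def __init__(self, forward_op, adjoint_op, n_iters=5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.alpha = nn.Parameter(torch.full((n_iters,), 1e-2))
        self.tau = nn.Parameter(torch.full((n_iters,), 1e-3))

    def forward(self, b, x0):
        x = x0
        for k in range(len(self.alpha)):
            grad = self.At(self.A(x) - b)   # data-fidelity gradient from the imaging model
            x = x - self.alpha[k] * grad
            x = torch.sign(x) * torch.clamp(x.abs() - self.tau[k], min=0.0)  # simple sparsity prior
        return x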
Deep learning, a rebranding of deep neural network research works, has achieved a remarkable success in recent years. With multiple hidden layers, deep learning models aim at computing the hierarchical feature representations of the observational data. Meanwhile, due to its severe disadvantages in data consumption, computational resources, parameter tuning costs and the lack of explainability, deep learning has also suffered from lots of criticism. In this paper, we will introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models. Instead of building one single deep model, based on a set of sampled sub-instances, SEGEN adopts a genetic-evolutionary learning strategy to build a group of unit models generations by generations. The unit models incorporated in SEGEN can be either traditional machine learning models or the recent deep learning models with a much “narrower” and “shallower” architecture. The learning of each instance at the final generation will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. From the computational perspective, SEGEN requires far less data, fewer computational resources and parameter tuning efforts, but has sound theoretic interpretability of the learning process and . Extensive experiments have been done on several different real-world benchmark datasets, and the experimental obtained by SEGEN have demonstrated its advantages over the state-of-the-art representation learning models. In recent years, deep learning, a rebranding of deep neural network research works, has achieved a remarkable success. The essence of deep learning is to compute the hierarchical feature representations of the observational data BID8; BID16. With multiple hidden layers, the deep learning models have the capacity to capture very good projections from the input data space to the objective output space, whose outstanding performance has been widely illustrated in various applications, including speech and audio processing BID7;, language modeling and processing BID0; BID19, information retrieval BID10; BID22, objective recognition and computer vision BID16, as well as multimodal and multi-task learning BID27 BID28. By this context so far, various kinds of deep learning models have been proposed already, including deep belief network BID11, deep Boltzmann machine BID22, deep neural network BID13; BID14 and deep autoencoder model BID24.Meanwhile, deep learning models also suffer from several serious criticism due to their several severe disadvantages BID29. Generally, learning and training deep learning models usually demands a large amount of training data, large and powerful computational facilities, heavy parameter tuning costs, but lacks theoretic explanation of the learning process and . These disadvantages greatly hinder the application of deep learning models in many areas which cannot meet the requirements or requests a clear interpretability of the learning performance. Due to these reasons, by this context so far, deep learning research and application works are mostly carried out within/via the collaboration with several big technical companies, but the models proposed by them (involving hundreds of hidden layers, billions of parameters, and using a large cluster with thousands of server nodes BID5) can hardly be applied in other real-world applications. 
In this paper, we propose a brand new model, namely SEGEN (Sample-Ensemble Genetic Evolutionary Network), which can work as an alternative approach to the deep learning models. Instead of building one single model with a deep architecture, SEGEN adopts a genetic-evolutionary learning strategy to train a group of unit models generations by generations. Here, the unit models can be either traditional machine learning models or deep learning models with a much "narrower" and "shallower" structure. Each unit model will be trained with a batch of training instances sampled form the dataset. By selecting the good unit models from each generation (according to their performance on a validation set), SEGEN will evolve itself and create the next generation of unit modes with probabilistic genetic crossover and mutation, where the selection and crossover probabilities are highly dependent on their performance fitness evaluation. Finally, the learning of the data instances will be effectively combined from each unit model via diffusive propagation and ensemble learning strategies. These terms and techniques mentioned here will be explained in great detail in Section 4. Compared with the existing deep learning models, SEGEN have several great advantages, and we will illustrate them from both the bionics perspective and the computational perspective as follows. From the bionics perspective, SEGEN effectively models the evolution of creatures from generations to generations, where the creatures suitable for the environment will have a larger chance to survive and generate the offsprings. Meanwhile, the offsprings inheriting good genes from its parents will be likely to adapt to the environment as well. In the SEGEN model, each unit network model in generations can be treated as an independent creature, which will receive a different subsets of training instances and learn its own model variables. For the unit models suitable for the environment (i.e., achieving a good performance on a validation set), they will have a larger chance to generate their child models. The parent model achieving better performance will also have a greater chance to pass their variables to the child model. From the computational perspective, SEGEN requires far less data and resources, and also has a sound theoretic explanation of the learning process and . The unit models in each generation of SEGEN are of a much simpler architecture, learning of which can be accomplished with much less training data, less computational resources and less hyper-parameter tuning efforts. In addition, the training dataset pool, model hyper-parameters are shared by the unit models, and the increase of generation size (i.e., unit model number in each generation) or generation number (i.e., how many generation rounds will be needed) will not increase the learning resources consumption. The relatively "narrower" and "shallower" structure of unit models will also significantly enhance the interpretability of the unit models training process as well as the learning , especially if the unit models are the traditional non-deep learning models. Furthermore, the sound theoretical foundations of genetic algorithm and ensemble learning will also help explain the information inheritance through generations and ensemble in SEGEN. In this paper, we will use network embedding problem BID25 BID2; BID20 (applying autoencoder as the unit model) as an example to illustrate the SEGEN model. 
Meanwhile, applications of SEGEN on other data categories (e.g., images and raw feature inputs) with CNN and MLP as the unit model will also be provided in Section 5.3. The following parts of this paper are organized as follows. The problem formulation is provided in Section 3. Model SEGEN will be introduced in Section 4, whose performance will be evaluated in Section 5. Finally, Section 2 introduces the related works and we conclude this paper in Section 6. Deep Learning Research and Applications: The essence of deep learning is to compute hierarchical features or representations of the observational data BID8; BID16. With the surge of deep learning research and applications in recent years, lots of research works have appeared to apply the deep learning methods, like deep belief network BID11, deep Boltzmann machine BID22, Deep neural network; BID14 and Deep autoencoder model BID24, in various applications, like speech and audio processing BID7;, language modeling and processing BID0; BID19, information retrieval; BID22, objective recognition and computer vision BID16, as well as multimodal and multi-task learning BID27 BID28.Network Embedding: Network embedding has become a very hot research problem recently, which can project a graphstructured data to the feature vector representations. In graphs, the relation can be treated as a translation of the entities, and many translation based embedding models have been proposed, like TransE BID1, and. In recent years, many network embedding works based on random walk model and deep learning models have been introduced, like Deepwalk BID20, , node2vec BID9, and. Perozzi et al. extends the word2vec model BID18 to the network scenario and introduce the Deepwalk algorithm BID20. propose to embed the networks with LINE algorithm, which can preserve both the local and global network structures. BID9 introduce a flexible notion of a node's network neighborhood and design a biased random walk procedure to sample the neighbors. Chang et al. BID2 learn the embedding of networks involving text and image information. BID4 introduce a task guided embedding model to learn the representations for the author identification problem. In this section, we will provide the definitions of several important terminologies, based on which we will define the network representation learning problem. The SEGEN model will be illustrated based on the network representation learning problem in this paper, where the input is usually a large-sized network structured dataset. DEFINITION 1 (Network Data): Formally, a network structured dataset can be represented as a graph G = (V, E), where V denotes the node set and E contains the set of links among the nodes. In the real-world applications, lots of data can be modeled as networks. For instance, online social media can be represented as a network involving users as the nodes and social connections as the links; e-commerce website can be denoted as a network with customer and products as the nodes, and purchase relation as the links; academic bibliographical data can be modeled as a network containing papers, authors as the nodes, and write/cite relationships as the links. Given a large-sized input network data G = (V, E), a group of sub-networks can be extracted from it, which can be formally represented as a sub-network set of G. Step 1: Network SamplingStep 2: Sub-Network Representation LearningStep 3: Result Ensemble Figure 1: The SEGEN Framework. 
DEFINITION 2 (Sub-network Set): Based on a certain sampling strategy, we can represent the set of sampled subnetworks from network G as set G = {g 1, g 2, · · ·, g m} of size m. Here, g i ∈ G denotes a sub-network of G, and it can be represented as DISPLAYFORM0 In Section 4, we will introduce several different sampling strategies, which will be applied to obtained several different sub-network pools for unit model building and validation. Problem Statement: Based on the input network data G = (V, E), the network representation learning problem aims at learning a mapping f: V → R d to project each node from the network to a low-dimensional feature space. There usually exist some requirements on mapping f (·), which should preserve the original network structure, i.e., closer nodes should have close representations; while disconnected nodes have different representations on the other hand. In this section, we will introduce the proposed framework SEGEN in detail. As shown in Figure 1, the proposed framework involves three steps: network sampling, sub-network representation learning, and ensemble. Given the large-scale input network data, framework SEGEN will sample a set of sub-networks, which will be used as the input to the genetic evolutionary network model for representation learning. Based on the learned for the sub-networks, framework SEGEN will combine them together to obtain the final output . In the following parts, we will introduce these three steps in great detail respectively. In framework SEGEN, instead of handling the input large-scale network data directly, we propose to sample a subset (of set size s) of small-sized sub-networks (of a pre-specified sub-network size k) instead and learn the representation feature vectors of nodes based on the sub-networks. To ensure the learned representations can effectively represent the characteristics of nodes, we need to ensure the sampled sub-networks share similar properties as the original large-sized input network. As shown in Figure 1, 5 different types of network sampling strategies (indicated in 5 different colors) are adopted in this paper, and each strategy will lead to a group of small-sized sub-networks, which can capture both the local and global structures of the original network. 4.1.1 BFS BASED NETWORK SAMPLING Based on the input network G = (V, E), Breadth-First-Search (BFS) based network sampling strategy randomly picks a seed node from set V and performs BFS to expend to the unreached nodes. Formally, the neighbors of node v ∈ V can be denoted as set Γ(v; 1) = {u|u ∈ V ∧ (u, v) ∈ E}. After picking v, the sampling strategy will continue to randomly add k − 1 nodes from set Γ(v; 1), if |Γ(v; 1)| ≥ k − 1; otherwise, the sampling strategy will go to the 2-hop neighbors of v (i.e., Γ(v; 2) = {u|∃w ∈ V, (u, w) ∈ E ∧ (w, v) ∈ E ∧ (u, v) / ∈ E}) and so forth until the remaining k − 1 nodes are selected. In the case when the size of connected component that v involves in is smaller than k, the strategy will further pick another seed node to do BFS from that node to finish the sampling of k nodes. These sampled k nodes together with the edges among them will form a sampled sub-network g, and all the p sampled sub-networks will form the sub-network pool G BFS (parameter p denotes the pool size). 4.1.2 DFS BASED NETWORK SAMPLING Depth-First-Search (DFS) based network sampling strategy works in a very similar way as the BFS based strategy, but it adopts DFS to expand to the unreached nodes instead. 
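A minimal sketch of the BFS-based sampling step above is given below (the DFS variant differs only in expanding depth-first); the adjacency is taken as a plain dictionary from nodes to neighbor sets, and helper names are ours.

import random

def bfs_sample(adj, k, rng=random):
    # adj: dict mapping each node to the set of its neighbors; returns k sampled nodes.
    nodes = list(adj)
    k = min(k, len(nodes))
    sampled, frontier = set(), []
    while len(sampled) < k:
        if not frontier:                                   # start (or restart) from a fresh seed node
            seed = rng.choice([v for v in nodes if v not in sampled])
            sampled.add(seed)
            frontier = [seed]
            continue
        v = frontier.pop(0)
        neighbors = [u for u in adj[v] if u not in sampled]
        rng.shuffle(neighbors)
        for u in neighbors:                                # expands to 2-hop, 3-hop, ... as needed
            if len(sampled) >= k:
                break
            sampled.add(u)
            frontier.append(u)
    return sampled

The edges of the original network among the sampled nodes then form the sub-network g added to the pool G BFS.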
Similar to the BFS method, in the case when the node connected component has size less than k, DFS sampling strategy will also continue to pick another node as the seed node to continue the sampling process. The sampled nodes together with the links among them will form the sub-networks to be involved in the final sampled sub-network pool G DFS (of size p).A remark to be added here: the sub-networks sampled via BFS can mainly capture the local network structure of nodes (i.e., the neighborhood), and in many of the cases they are star structured diagrams with the picked seed node at the center surrounded by its neighbors. Meanwhile, the sub-networks sampled with DFS are slightly different, which involve "deeper" network connection patterns. In the extreme case, the sub-networks sampled via DFS can be a path from the seed nodes to a node which is (k − 1)-hop away. To balance between those extreme cases aforementioned, we introduce a Hybrid-Search (HS) based network sampling strategy by combining BFS and DFS. HS randomly picks seed nodes from the network, and reaches other nodes based on either BFS or DFS strategies with probabilities p and (1 − p) respectively. For instance, in the sampling process, HS first picks node v ∈ V as the seed node, and samples a random node u ∈ Γ(v; 1). To determine the next node to sample, HS will "toss a coin" with p probability to sample nodes from Γ(v; 1) \ {u} (i.e., BFS) and 1 − p probability to sample nodes from Γ(u; 1) \ {v} (i.e., DFS). Such a process continues until k nodes are selected, and the sampled nodes together with the links among them will form the sub-network. We can represent all the sampled sub-networks by the HS based network sampling strategy as pool G HS.These three network sampling strategies are mainly based on the connections among the nodes, and nodes in the sampled sub-networks are mostly connected. However, in the real-world networks, the connections among nodes are usually very sparse, and most of the node pairs are not connected. In the following part, we will introduce two other sampling strategies to handle such a case. Instead of sampling sub-networks via the connections among them, the node sampling strategy picks the nodes at random from the network. Based on node sampling, the final sampled sub-network may not necessarily be connected and can involve many isolated nodes. Furthermore, uniform sampling of nodes will also deteriorate the network properties, since it treats all the nodes equally and fails to consider their differences. In this paper, we propose to adopt the biased node sampling strategy, where the nodes with more connections (i.e., larger degrees) will have larger probabilities to be sampled. Based on the connections among the nodes, we can represent the degree of node v ∈ V as d(u) = |Γ(u; 1)|, and the probabilities for u to be sampled can be denoted as DISPLAYFORM0 2|E|. Instead of focusing on the local structures of the network, the sub-networks sampled with the biased node sampling strategy can capture more "global" structures of the input network. Formally, all the sub-networks sampled via this strategy can be represented as pool G NS. Another "global" sub-network sampling strategy is the edge based sampling strategy, which samples the edges instead of nodes. Here, uniform sampling of edges will be reduced to biased node selection, where high-degree nodes will have a larger probability to be involved in the sub-network. In this paper, we propose to adopt a biased edge sampling strategy instead. 
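Before moving on to edge sampling, the degree-biased node sampling above amounts to drawing nodes with probability proportional to d(u); the sketch below samples k distinct nodes without replacement, which is our reading of the strategy.

import numpy as np

def biased_node_sample(adj, k, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(adj)
    degrees = np.array([len(adj[v]) for v in nodes], dtype=float)
    probs = degrees / degrees.sum()                    # d(u) / 2|E|
    idx = rng.choice(len(nodes), size=k, replace=False, p=probs)
    return {nodes[i] for i in idx}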
For each edge (u, v) ∈ E, the probability for it to be sampled is actually proportional to DISPLAYFORM0. The sampled edges together with the incident nodes will form a sub-network, and all the sampled sub-networks with biased edge sampling strategy can be denoted as pool G ES.These two network sampling strategies can select the sub-structures of the input network from a global perspective, which can effectively capture the sparsity property of the input network. In the experiments to be introduced in Section 5, we will evaluate these different sampling strategies in detail. In this part, we will focus on introducing the Genetic Evolutionary Network (GEN) model, which accepts each sub-network pool as the input and learns the representation feature vectors of nodes as the output. We will use G to represent the sampled pool set, which can be DISPLAYFORM0 In the GEN model, there exist multiple generations of unit models, where the earlier generations will evolve and generate the later generations. Each generation will also involve a group of unit models, namely the unit model population. Formally, the initial generation of the unit models (i.e., the 1 st generation) can be represented as set DISPLAYFORM1 i is a base unit model to be introduced in the following subsection. Formally, the variables involved in each unit model, e.g., M 1 i, can be denoted as vector θ 1 i, which covers the weight and bias terms in the model (which will be treated as the model genes in the evolution to be introduced later). In the initialization step, the variables of each unit model are assigned with a random value generated from the standard normal distribution. In this paper, we will take network representation learning as an example, and propose to adopt the correlated autoencoder as the base model. We want to clarify again that the SEGEN framework is a general framework, and it works well for different types of data as well as different base models. For some other tasks or other learning settings, many other existing models, e.g., CNN and MLP to be introduced in Section 5.3, can be adopted as the base model as well. Autoencoder is an unsupervised neural network model, which projects data instances from the original feature space to a lower-dimensional feature space via a series of non-linear mappings. Autoencoder model involves two steps: encoder and decoder. The encoder part projects the original feature vectors to the objective feature space, while the decoder step recovers the latent feature representations to a reconstructed feature space. Based on each sampled sub-network g ∈ T, where g = (V g, E g), we can represent the sub-network structure as an adjacency matrix A g = {0, 1} |Vg|×|Vg|, where DISPLAYFORM0 be the corresponding latent feature representation of x i at hidden layers 1, 2, · · ·, o in the encoder step. The encoding in the objective feature space can be denoted as z i ∈ R d of dimension d. In the decoder step, the input will be the latent feature vector z i, and the final output will be the reconstructed vectorx i (of the same dimension as x i). The latent feature vectors at each hidden layers can be represented asŷ DISPLAYFORM1 As shown in the architecture in FIG1, the relationships among these variables can be represented with the following equations: DISPLAYFORM2 The objective of traditional autoencoder model is to minimize the loss between the original feature vector x i and the reconstructed feature vectorx i of data instances. 
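Before moving to the correlated extension introduced next, here is a minimal PyTorch sketch of the plain autoencoder unit model described above, mapping a row x_i of the sub-network adjacency matrix to a d-dimensional code z_i and back; the layer sizes, class name and activation choices are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class UnitAutoencoder(nn.Module):
    """Plain autoencoder unit model: encoder -> d-dimensional code -> decoder."""

    def __init__(self, n_nodes, d=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(),
            nn.Linear(hidden, d),
        )
        self.decoder = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, n_nodes), nn.Sigmoid(),  # reconstruct adjacency rows in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)          # latent representation z_i
        x_hat = self.decoder(z)      # reconstruction of the input row
        return z, x_hat

if __name__ == "__main__":
    n_nodes = 10                     # size of one sampled sub-network
    model = UnitAutoencoder(n_nodes)
    a_g = (torch.rand(n_nodes, n_nodes) < 0.2).float()  # toy adjacency matrix A_g
    z, x_hat = model(a_g)
    loss = ((x_hat - a_g) ** 2).sum()  # basic reconstruction loss
    print(z.shape, loss.item())
```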
Meanwhile, for the network representation learning task, the learning task of nodes in the sub-networks are not independent but highly correlated. For the connected nodes, they should have closer representation feature vectors in the latent feature space; while for those which are isolated, their latent representation feature vectors should be far away instead. What's more, since the input feature vectors are extremely sparse (lots of the entries are 0s), simply feeding them to the model may lead to some trivial solutions, like 0 vector for both z i and the decoded vectorx i. Therefore, we propose to extend the Autoencoder model to the correlated scenario for networks, and define the objective of the correlated autoencoder model as follows: DISPLAYFORM3 where DISPLAYFORM4 and α, β are the weights of the correlation and regularization terms respectively. Entries in weight vector b i have value 1 except the entries corresponding to non-zero element in x i, which will be assigned with value γ (γ > 1) to preserve these non-zero entries in the reconstructed vectorx i. Instead of fitting each unit model with all the sub-networks in the pool G, in GEN, a set of sub-network training batches T 1, T 2, · · ·, T m will be sampled for each unit model respectively in the learning process, where |T i | = b, ∀i ∈ {1, 2, · · ·, m} are of the pre-defined batch size b. These batches may share common sub-networks as well, i.e., T i ∩ T j may not necessary be ∅. In the GEN model, the unit models learning process for each generation involves two steps: generating the batches T i from the pool set G for each unit model M 1 i ∈ M 1, and learning the variables of the unit model M 1 i based on sub-networks in batch T i. Considering that the unit models have a much smaller number of hidden layers, the learning time cost of each unit model will be much less than the deeper models on larger-sized networks. In Section 5, we will provide a more detailed analysis about the running time cost and space cost of SEGEN. The unit models in the generation set M 1 can have different performance, due to different initial variable values, and different training batches in the learning process. In framework SEGEN, instead of applying "deep" models with multiple hidden layers, we propose to "deepen" the models in another way: "evolve the unit model into'deeper' generations". A genetic algorithm style method is adopted here for evolving the unit models, in which the well-trained unit models will have a higher chance to survive and evolve to the next generation. To pick the well-trained unit models, we need to evaluate their performance, which is done with the validation set V sampled from the pool. For each unit model M 1 k ∈ M 1, based on the sub-networks in set V, we can represent the introduced loss of the model as The probability for each unit model to be picked as the parent model for the crossover and mutation operations can be represented as DISPLAYFORM0 DISPLAYFORM1. In the real-world applications, a normalization of the loss terms among these unit models is necessary. For the unit model introducing a smaller loss, it will have a larger chance to be selected as the parent unit model. Considering that the crossover is usually done based a pair of parent models, we can represent the pairs of parent models selected from set M DISPLAYFORM0,···,m}, based on which we will be able to generate the next generation of unit models, i.e., M 2. 
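A small sketch of the loss-based parent selection described above; the exact normalization is not recoverable from this copy (the displayed formula is missing), so the softmax over shifted negative losses used here is only one plausible choice that gives smaller-loss unit models larger selection probabilities.

```python
import math
import random

def selection_probabilities(losses):
    """Map validation losses to selection probabilities (smaller loss -> larger
    probability). A softmax over negative, shifted losses is used here; the text
    only requires a normalized scheme that favours smaller losses."""
    m = max(losses)
    scores = [math.exp(m - l) for l in losses]     # shift for numerical stability
    total = sum(scores)
    return [s / total for s in scores]

def select_parent_pairs(losses, n_pairs, seed=0):
    """Draw n_pairs (i, j) index pairs of parent unit models (a pair may repeat
    a model in this simplified sketch)."""
    rng = random.Random(seed)
    probs = selection_probabilities(losses)
    idx = list(range(len(losses)))
    pairs = []
    for _ in range(n_pairs):
        i, j = rng.choices(idx, weights=probs, k=2)
        pairs.append((i, j))
    return pairs

if __name__ == "__main__":
    val_losses = [0.9, 0.4, 0.7, 0.2]              # one validation loss per unit model
    print(selection_probabilities(val_losses))
    print(select_parent_pairs(val_losses, n_pairs=4))
```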
For the k th pair of parent unit model (M DISPLAYFORM0, we can denote their genes as their variables θ 1 i, θ 1 j respectively (since the differences among the unit models mainly lie in their variables), which are actually their chromosomes for crossover and mutation. Crossover: In this paper, we propose to adopt the uniform crossover to get the chromosomes (i.e., the variables) of their child model. Considering that the parent models M 1 i and M 1 j can actually achieve different performance on the validation set V, in the crossover, the unit model achieving better performance should have a larger chance to pass its chromosomes to the child model. Formally, the chromosome inheritance probability for parent model M 1 i can be represented as DISPLAYFORM1 Meanwhile, the chromosome inheritance probability for model M DISPLAYFORM2 where indicator function 1(·) returns value 1 if the condition is True; otherwise, it returns value 0.Mutation: The variables in the chromosome vectorθ 2 k (l) ∈ R |θ 1 | are all real values, and some of them can be altered, which is also called mutation in traditional genetic algorithm. Mutation happens rarely, and the chromosome mutation probability is γ in the GEN model. Formally, we can represent the mutation indicator vector as m ∈ {0, 1} d, and the l th entry of vector θ 2 k after mutation can be represented as θ DISPLAYFORM3 where rand denotes a random value selected from range. Formally, the chromosome vector θ 2 k defines a new unit model with knowledge inherited form the parent models, which can be denoted as M 2 k. Based on the parent model set P 1, we can represent all the newly generated models as DISPLAYFORM4, which will form the 2 nd generation of unit models. Based on the models introduced in the previous subsection, in this part, we will introduce the hierarchical ensemble method, which involves two steps: local ensemble of for the sub-networks on each sampling strategies, and global ensemble of obtained across different sampling strategies. Based on the sub-network pool G obtained via the sampling strategies introduced before, we have learned the K th generation of the GEN model M K (or M for simplicity), which contains m unit models. In this part, we will introduce how to fuse the learned representations from each sub-networks with the unit models. Formally, given a sub-network g ∈ G with node set V g, by applying unit model M j ∈ M to g, we can represent the learned representation for node v q ∈ V g as vector z j,q, where q denotes the unique node index in the original complete network G before sampling. For the nodes v p / ∈ V g, we can denote its representation vector z j,p = null, which denotes a dummy vector of length d. Formally, we will be able represent the learned representation feature vector for node v q as DISPLAYFORM0 where operator denotes the concatenation operation of feature vectors. Considering that in the network sampling step, not all nodes will be selected in sub-networks. For the nodes v p / ∈ V g, ∀g ∈ G, we will not be able to learn its representation feature vector (or its representation will be filled with a list of dummy empty vector). Formally, we can represent these non-appearing nodes as set V n = V \ g∈G V g. In this paper, to compute the representation for these nodes, we propose to propagate the learned representation from their neighborhoods to them instead. 
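The following numpy sketch illustrates one way to realize the uniform crossover and rare mutation described above; the precise inheritance probability and the distribution used to resample mutated genes are not shown in this copy, so the loss-ratio probability and the standard-normal resampling below are our assumptions.

```python
import numpy as np

def crossover(theta_i, theta_j, loss_i, loss_j, rng):
    """Uniform crossover: each gene is inherited from parent i with a probability
    that grows as parent i's validation loss shrinks."""
    p_i = loss_j / (loss_i + loss_j)          # the better (smaller-loss) parent inherits more
    mask = rng.random(theta_i.shape) < p_i
    return np.where(mask, theta_i, theta_j)

def mutate(theta, gamma, rng):
    """With small probability gamma per entry, replace the gene by a fresh random
    value (the standard-normal initialization is reused here as an assumption)."""
    mask = rng.random(theta.shape) < gamma
    return np.where(mask, rng.standard_normal(theta.shape), theta)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_i, theta_j = rng.standard_normal(8), rng.standard_normal(8)
    child = mutate(crossover(theta_i, theta_j, loss_i=0.3, loss_j=0.6, rng=rng),
                   gamma=0.05, rng=rng)
    print(child)
```

The ensemble step then fuses the representations produced by the unit models of the final generation; the propagation rule for nodes that were never sampled is formalized next.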
Formally, given node v p ∈ V n and its neighbor set Γ(v p) = {v o |v o ∈ V ∧ (u, v p) ∈ E}, if there exists node in Γ(v p) with non-empty representation feature vector, we can represent the propagated representation for v p as DISPLAYFORM1 where N = vo∈Γ(vp) 1(v o / ∈ V n). In the case that Γ(v p) ⊂ V n, random padding will be applied to get the representation vector z p for node v p. Generally, these different network sampling strategies introduced at the beginning in Section 4.1 captures different local/global structures of the network, which will all be useful for the node representation learning. In the global ensemble step, we propose to group these features together as the output. Formally, based on the BFS, DFS, HS, biased node and biased edge sampling strategies, to differentiate their learned representations for nodes (e.g., v q ∈ V), we can denoted their representation feature vectors as z BFS q, z DFS q, z HS q, z NS q and z ES q respectively. In the case that node v q has never appeared in any sub-networks in any of the sampling strategies, its corresponding feature vector can be denoted as a dummy vector filled with 0s. In the global ensemble step, we propose to linearly sum the feature vectors to get the fuses representationz q as follows: DISPLAYFORM0 Learning of the weight parameters w BFS, w DFS, w HS, w NS and w ES is feasible with the complete network structure, but it may introduce extra time costs and greatly degrade the efficiency SEGEN. In this paper, we will simply assign them with equal value, i.e.,z q is an average of z In this section, we will analyze the proposed model SEGEN regarding its performance, running time and space cost, which will also illustrate the advantages of SEGEN compared with the other existing deep learning models. Model SEGEN, in a certain sense, can also be called a "deep" model. Instead of stacking multiple hidden layers inside one single model like existing deep learning models, SEGEN is deep since the unit models in the successive generations are generated by a namely "evolutionary layer" which performs the validation, selection, crossover, and mutation operations connecting these generations. Between the generations, these "evolutionary operations" mainly work on the unit model variables, which allows the immigration of learned knowledge from generation to generation. In addition, via these generations, the last generation in SEGEN can also capture the overall patterns of the dataset. Since the unit models in different generations are built with different sampled training batches, as more generations are involved, the dataset will be samples thoroughly for learning SEGEN. There have been lots of research works done on analyzing the convergence, performance bounds of genetic algorithms BID21, which can provide the theoretic foundations for SEGEN. Due to the difference in parent model selection, crossover, mutation operations and different sampled training batches, the unit models in the generations of SEGEN may perform quite differently. In the last step, SEGEN will effectively combine the learning from the multiple unit models together. With the diverse combined from these different learning models, SEGEN is able to achieve better performance than each of the unit models, which have been effectively demonstrated in BID31. According the the model descriptions provided in Section 4, we summarize the key parameters used in SEGEN as follows, which will help analyze its space and time complexity.• Sampling: Original data size: n. 
Sub-instance size: n′. Pool size: p. Here, we will use network structured data as an example to analyze the space and time complexity of the SEGEN model. Space Complexity: Given a large-scale network with n nodes, the space cost required for storing the whole network in a matrix representation is O(n²). Meanwhile, via network sampling, we can obtain a pool of sub-networks, and the space required for storing these sub-networks is O(p·(n′)²). Generally, in applications of SEGEN, n′ can take a very small value, e.g., 50, and p can take the value p = c·n/n′ (c is a constant) so as to cover all the nodes in the network. In such a case, the space cost of SEGEN will be O(c·n·n′), which is linear in n and much smaller than O(n²). Time Complexity: Depending on the specific unit models used in composing SEGEN, we can denote the time complexity of learning one unit model on the original network with n nodes as O(f(n)), where f(n) is usually a high-order function. Meanwhile, for learning SEGEN on the sampled sub-networks with n′ nodes, the overall introduced time cost will be O(K·m·(b·f(n′) + d·n′)), where the term d·n′ (an approximation of the variable size) represents the cost introduced by the crossover and mutation operations on the unit model variables. Here, by assigning b a fixed value b = c·n/n′, the time complexity of SEGEN reduces to O(K·m·c·(f(n′)/n′)·n + K·m·d·n′), which is linear in n. Compared with existing deep learning models trained on the whole dataset, the advantages of SEGEN are summarized below: • Less Data for Unit Model Learning: Each unit model has a "shallow" and "narrow" structure (shallow: few or even no hidden layers; narrow: based on sampled sub-instances of a much smaller size), so it needs far fewer variables and far less data to learn. • Less Computational Resources: Each unit model has a much simpler structure, and its learning process consumes far less computational resources in both time and space. • Less Parameter Tuning: SEGEN can accept both deep (in a simpler version) and shallow learning models as the unit model, and the hyper-parameters can be shared among the unit models, which leads to far fewer hyper-parameters to tune in the learning process. • Sound Theoretic Explanation: The unit learning model, the genetic algorithm and ensemble learning (aforementioned) can all provide theoretic foundations for SEGEN, which lead to a sound theoretic explanation of both the learning results and the SEGEN model itself. To test the effectiveness of the proposed model, extensive experiments will be done on several real-world network structured datasets, including social networks, images and raw feature representation datasets. In this section, we will first introduce the detailed experimental settings, covering experimental setups, comparison methods, evaluation tasks and metrics for the social network representation learning task. After that, we will show the convergence analysis, parameter analysis and the main experimental results of SEGEN on the social network datasets. Finally, we will provide the experiments of SEGEN based on the image and raw feature representation datasets, involving CNN and MLP as the unit models respectively. The network datasets used in the experiments are crawled from two different online social networks, Twitter and Foursquare, respectively. The Twitter network dataset involves 5,120 users and 130,576 social connections among the user nodes.
Meanwhile, the Foursquare network dataset contains 5, 392 users together with the 55, 926 social links connecting them. According to the descriptions of SEGEN, based on the complete input network datasets, a set of sub-networks are randomly sampled with network sampling strategies introduced in this paper, where the sub-network size is denoted as n, and the pool size is controlled by p. Based on the training/validation batches sampled sub-network pool, K generations of unit models will be built in SEGEN, where each generation involves m unit models (convergence analysis regarding parameter K is available in Section 7.1.1). Finally, the learning at the ending generation will be effectively combined to generate the ensemble output. For the nodes which have never been sampled in any sub-networks, their representations can be learned with the diffusive propagation from their neighbor nodes introduced in this paper. The learned by SEGEN will be evaluated with two application tasks, i.e., network recovery and community detection respectively. The detailed parameters sensitivity analysis is also available in Section 7.1.2. The network representation learning comparison models used in this paper are listed as follows FORMULA18 • SEGEN: Model SEGEN proposed in this paper is based on the genetic algorithm and ensemble learning, which effectively combines the learned sub-network representation feature vectors from the unit models to generate the feature vectors of the whole network.• LINE: The LINE model is a scalable network embedding model proposed in, which optimizes an objective function that preserves both the local and global network structures. LINE uses a edge-sampling algorithm to addresses the limitation of the classical stochastic gradient descent.• DEEPWALK: The DEEPWALK model BID20 extends the word2vec model BID18 to the network embedding scenario. DEEPWALK uses local information obtained from truncated random walks to learn latent representations.• NODE2VEC: The NODE2VEC model BID9 introduces a flexible notion of a node's network neighborhood and design a biased random walk procedure to sample the neighbors for node representation learning.• HPE: The HPE model is originally proposed for learning user preference in recommendation problems, which can effectively project the information from heterogeneous networks to a low-dimensional space. The network representation learning can hardly be evaluated directly, whose evaluations are usually based on certain application tasks. In this paper, we propose to use application tasks, network recovery and clustering, to evaluate the learned representation features from the comparison methods. Furthermore, the network recovery are evaluated by metrics, like AUC and Precision@500. Meanwhile the clustering are evaluated by Density and Silhouette. Without specific remarks, the default parameter setting for SEGEN in the experiments will be Parameter Setting 1 (PS1): sub-network size: 10, pool size: 200, batch size: 10, generation unit model number: 10, generation number: 30. The model training convergence analysis, and detailed analysis about the pool sampling and model learning parameters is available in the Appendix in Section 7.1. Besides these analysis , we also provide the performance analysis of SEGEN and baseline methods in TAB0, where the parameter settings are specified next to the method name. We provide the rank of method performance among all the methods, which are denoted by the numbers in blue font, and the top 5 are in a bolded font. 
As shown in the Tables, we have the network recovery and community detection on the left and right sections respectively. For the network recovery task, we change the ratio of negative links compared with positive links with values {1, 5, 10}, which are evaluated by the metrics AUC and Prec@500. For the community detection task, we change the number of clusters with values {5, 25, 50}, and the are evaluated by the metrics Density and Silhouette. Besides PS1 introduced at the beginning of Section 5.1, we have 4 other parameter settings selected based on the parameter analysis introduced before. PS2 for network recovery on Foursquare: sub-network size 50, pool size 600, batch size 5, generation size 50. PS3 for community detection on Foursquare: sub-network size 25, pool size 300, batch size 35, generation size 5. PS4 for network recovery on Twitter: sub-network size 50, pool size 700, batch size 10, generation size 5. PS5 for community detection on Twitter: sub-network size 45, pool size 500, batch size 50, generation size 5.According to the shown in TAB0, method SEGEN with PS2 can obtain very good performance for both the network recovery task and the community detection task. For instance, for the network recovery task, method SEGEN with PS2 achieves 0.909 AUC score, which ranks the second and only lose to SEGEN-HS with PS2; meanwhile, SEGEN with PS2 also achieves the second highest Prec@500 score (i.e., 0.872 for np-ratio = 1) and the third highest Prec@500 score (i.e., 0.642 and 0.530 for np-ratios 5 and 10) among the comparison methods. On the other hand, for the community detection task, SEGEN with PS3 can generally rank the second/third among the comparison methods for both density and silhouette evaluation metrics. For instance, with the cluster number is 5, the density obtained by SEGEN ranks the second among the methods, which loses to SEGEN-LS only. Similar can be observed for the Twitter network as shown in FIG1.By comparing SEGEN with SEGEN merely based on HS, BFS, DFS, NS, LS, we observe that the variants based on one certain type of sampling strategies can obtain relatively biased performance, i.e., good performance for the network recovery task but bad performance for the community detection task or the reverse. For instance, as shown in Figure 1, methods SEGEN with HS, BFS, DFS performs very good for the network recovery task, but its performance for the community detection ranks even after LINE, HPE and DEEPWALK. On the other hand, SEGEN with NS and LS is shown to perform well for the community detection task instead in Figure 1, those performance ranks around 7 for the network recovery task. For the Twitter network, similar biased can be observed but the are not identically the same. Model SEGEN combining these different sampling strategies together achieves relatively balanced and stable performance for different tasks. Compared with the baseline methods LINE, HPE, DEEPWALK and NODE2VEC, model SEGEN can obtain much better performance, which also demonstrate the effectiveness of SEGEN as an alternative approach for deep learning models on network representation learning. Besides the extended autoencoder model and the social network datasets, we have also tested the effectiveness of SEGEN on other datasets and with other unit models. In TAB2, we show the experimental of SEGEN and other baseline methods on the MNIST hand-written image datasets. 
The dataset contains 60,000 training instances and 10,000 testing instances, where each instance is a 28 × 28 image with a label denoting the corresponding digit. A Convolutional Neural Network (CNN) is used as the unit model in SEGEN, which involves 2 convolutional layers, 2 max-pooling layers, and two fully connected layers (with a 0.2 dropout rate). ReLU is used as the activation function in the CNN, and we adopt Adam as the optimization algorithm. Here, the images are of a small size and no sampling is performed, while the learning result of the best unit model in the ending generation (selected based on a validation batch) is output as the final result. In the experiments, SEGEN (CNN) is compared with several classic methods (e.g., LeNet-5, SVM, Random Forest, Deep Belief Net) and the state-of-the-art method gcForest. According to the results, SEGEN (CNN) can outperform the baseline methods by a clear margin. The accuracy obtained by SEGEN is 99.37%, which is higher than that of the other comparison methods. Meanwhile, in TAB3, we provide the learning results on three other benchmark datasets, YEAST 1, ADULT 2 and LETTER 3. These three datasets use traditional feature representations. A Multi-Layer Perceptron (MLP) is used as the unit model in SEGEN for these three datasets. We cannot find one unified MLP architecture that works for all three datasets. In the experiments, for the YEAST dataset, the MLP involves 1 input layer, 2 hidden layers and 1 output layer, whose neuron numbers are 8-64-16-10; for ADULT, the MLP architecture contains the neurons 14-70-50-2; for the LETTER dataset, the MLP has 3 hidden layers with neurons 16-64-48-32-26 at each layer respectively. The Adam optimization algorithm with a 0.001 learning rate is used to train the MLP model. For the ensemble strategy in these experiments, the best unit model is selected to generate the final prediction output. According to the results, compared with the baseline methods, SEGEN (MLP) also performs very well on the raw feature representation datasets, especially on the YEAST and ADULT datasets. On the LETTER dataset, SEGEN (MLP) only loses to gcForest, but outperforms the other methods consistently. In this paper, we have introduced an alternative approach to deep learning models, namely SEGEN. Significantly different from existing deep learning models, SEGEN builds a group of unit models generation by generation, instead of building one single model with an extremely deep architecture. The unit models in SEGEN can be either traditional machine learning models or the latest deep learning models with a "smaller" and "narrower" architecture. SEGEN has great advantages over deep learning models, since it requires much less training data, computational resources and parameter tuning effort, while providing more information about its learning and integration process. The effectiveness and efficiency of SEGEN have been well demonstrated by the extensive experiments on the real-world network structured datasets. In this part, we will provide experimental analysis of the convergence and parameters of SEGEN, including the sub-network size, the pool size, the batch size and the generation size respectively. According to the plots, for the Foursquare network, larger sub-network sizes and larger pool sizes lead to better performance in the network recovery task; meanwhile, smaller sub-network sizes achieve better performance for the community detection task.
For instance, SEGEN can achieve the best performance with sub-network size 50 and pool size 600 for the network recovery task, and SEGEN obtains the best performance with sub-network size 25 and pool size 300 for community detection. For the Twitter network, the performance of SEGEN is relatively stable over the parameters analyzed, with some fluctuations for certain parameter values. According to the results, the optimal sub-network and pool size values for the network recovery task are 50 and 700; meanwhile, for the community detection task, the optimal parameter values are 45 and 500 respectively. In FIG8, we provide the parameter sensitivity analysis of the batch size and generation size (i.e., the number of unit models in each generation) on Foursquare and Twitter. We change the generation size and batch size both with values in {5, 10, 15, · · ·, 50}, and compute the AUC, Prec@500, Density and Silhouette scores obtained by SEGEN. According to FIG8, batch size has no significant impact on the performance of SEGEN, while the generation size may affect SEGEN greatly, especially for the Prec@500 metric (the AUC obtained by SEGEN changes within the range [0.81, 0.82], i.e., with only minor fluctuation in terms of the values). The optimal parameter values selected for network recovery are 50 and 5 for the generation and batch sizes. Meanwhile, for community detection, SEGEN performs best with smaller generation and batch sizes, whose optimal values are 5 and 35 respectively. For the Twitter network, the impact of the batch size and generation size is different from that on Foursquare: a smaller generation size leads to better performance for SEGEN evaluated by Prec@500. The fluctuation in terms of AUC is also minor in terms of the values, and the optimal values of the generation size and batch size parameters for the network recovery task are 5 and 10 respectively. For the community detection task on Twitter, we select generation size 5 and batch size 40 as the optimal values. | We introduce a new representation learning model, namely "Sample-Ensemble Genetic Evolutionary Network" (SEGEN), which can serve as an alternative approach to deep learning models. | 676 | scitldr |
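Looking back at the MNIST experiment of this example, a plausible PyTorch instantiation of the CNN unit model described there (two convolutional layers, two max-pooling layers, two fully connected layers with 0.2 dropout, ReLU activations, Adam); the channel and hidden sizes are our assumptions, since they are not reported.

```python
import torch
import torch.nn as nn

class UnitCNN(nn.Module):
    """Small CNN unit model: 2 conv + 2 max-pool + 2 fully connected layers."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = UnitCNN()
    opt = torch.optim.Adam(model.parameters())
    x = torch.randn(4, 1, 28, 28)                  # a toy batch of MNIST-sized images
    loss = nn.functional.cross_entropy(model(x), torch.tensor([0, 1, 2, 3]))
    loss.backward(); opt.step()
    print(loss.item())
```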
How can we teach artificial agents to use human language flexibly to solve problems in a real-world environment? We have one example in nature of agents being able to solve this problem: human babies eventually learn to use human language to solve problems, and they are taught with an adult human-in-the-loop. Unfortunately, current machine learning methods (e.g. from deep reinforcement learning) are too data inefficient to learn a language in this way. An outstanding goal is finding an algorithm with a suitable ‘language learning prior’ that allows it to learn human language, while minimizing the number of required human interactions. In this paper, we propose to learn such a prior in simulation, leveraging the increasing amount of available compute for machine learning experiments. We call our approach Learning to Learn to Communicate (L2C). Specifically, in L2C we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol. Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans. To show the promise of the L2C framework, we conduct some preliminary experiments in a Lewis signaling game, where we show that agents trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents. In this paper, we propose to learn such a prior in simulation, leveraging the increasing amount of available compute for machine learning experiments BID0. We call our approach Learning to Learn to Communicate (L2C). Specifically, in L2C we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol. Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations, including populations speaking human language. Our key insight is that such populations can be obtained via self-play, after pre-training agents with imitation learning on a small amount of off-policy human language data. We call this latter technique Seeded Self-Play (S2P). Our preliminary experiments show that agents trained with L2C and S2P need fewer on-policy samples to learn a compositional language in a Lewis signaling game. Language is one of the most important aspects of human intelligence; it allows humans to coordinate and share knowledge with each other. We will want artificial agents to understand language as it is a natural means for us to specify their goals. So how can we train agents to understand language? We adopt the functional view of language BID16 that has recently gained popularity (8; 14): agents understand language when they can use language to carry out tasks in the real world. One approach to training agents that can use language in their environment is via emergent communication, where researchers train randomly initialized agents to solve tasks requiring communication (7; 16). An open question in emergent communication is how the ing communication protocols can be transferred to learning human language. Existing approaches attempt to do this using auxiliary tasks, for example having agents predict the label of an image in English while simultaneously playing an image-based referential game BID11. 
While this works for learning the names of objects, it is unclear whether simply using an auxiliary loss will scale to learning the English names of complex concepts, or to learning to use English to interact in a grounded environment. One approach that we know will work (eventually) for training language learning agents is using a human-in-the-loop, as this is how human babies acquire language. In other words, if we had a good enough model architecture and learning algorithm, the human-in-the-loop approach should work. However, recent work in this direction has concluded that current algorithms are too sample inefficient to effectively learn a language with compositional properties from humans. Human guidance is expensive, and thus we would want such an algorithm to be as sample efficient as possible. An open problem is thus to create an algorithm or training procedure that results in increased sample efficiency for language learning with a human-in-the-loop. In this paper, we present the Learning to Learn to Communicate (L2C) framework, with the goal of training agents to quickly learn new (human) languages. The core idea behind L2C is to leverage the increasing amount of available compute for machine learning experiments to learn a 'language learning prior' by training agents via meta-learning in simulation. Figure 1 gives a diagram of the L2C framework; an advantage of L2C is that agents can be trained in an external environment (which grounds the language), where agents interact with the environment via actions and language, so (in theory) L2C could be scaled to learn complicated grounded tasks involving language. Specifically, we train a meta-learning agent in simulation to interact with populations of pre-trained agents, each with their own distinct communication protocol. Once the meta-learning agent is able to quickly adapt to each population of agents, it can be deployed in new populations unseen during training, including populations of humans. The L2C framework has two main advantages: (1) it allows agents to learn language that is grounded in an environment with which the agents can interact (i.e., it is not limited to referential games); and (2) in contrast with work from the instruction following literature, agents can be trained via L2C to both speak (output language to help accomplish their goal) and listen (map from the language to a goal or sequence of actions). To show the promise of the L2C framework, we provide some preliminary experiments in a Lewis signaling game BID3. Specifically, we show that agents trained with L2C are able to learn a simple form of human language (represented by a hand-coded compositional language) in fewer iterations than randomly initialized agents. These preliminary results suggest that L2C is a promising framework for training agents to learn human language from few human interactions. L2C is a training procedure that is composed of three main phases: (1) Training agent populations: training populations of agents to solve some task (or set of tasks) in an environment. (2) Training the meta-learner on agent populations: we train a meta-learning agent to 'perform well' (i.e., achieve a high reward) on tasks in each of the training populations, after a small number of updates. (3) Testing the meta-learner: testing the meta-learning agent's ability to learn new languages, which could be either artificial (emergent languages unseen during training) or human. A diagram giving an overview of the L2C framework is shown in Figure 1.
Phase 1 can be achieved in any number of ways, either through supervised learning (using approximate backpropagation) or via reinforcement learning (RL). Phases 2 and 3 follow the typical meta-learning set-up: to conserve space, we do not replicate a formal description of the meta-learning framework, but we direct interested readers to Section 2.1 of. In our case, each 'task' involves a separate population of agents with its own emergent language. While meta-training can also be performed via supervised learning or RL, Phase 3 must be done using RL, as it involves interacting with humans, which cannot be differentiated through. See Section 6 for a discussion of each of these phases in more detail. Seeded self-play (S2P) is a straightforward technique for training agents in simulation to develop complex behaviours. The idea is to 'seed' a population of agents with a small amount of data exhibiting the desired behaviour (here, human language) before letting them continue training via self-play. We collect some data sampled from a fixed seed population. This corresponds to the actual number of samples that we care about, i.e., the number of human demonstrations. We first train each agent (a listener and a speaker) to perform well on these human samples. We call this step the imitation-learning step. Then we take each of these trained agents (a pair of speaker and listener) and deploy them against each other to solve the task via emergent communication. We call this step the fine-tuning step. While these agents are exchanging messages in their emergent language, we make sure that the language does not diverge too much from the original language (i.e., the language of the fixed seed population). We enforce this by having a schedule over the fine-tuning and the imitation-learning steps such that both agents are able to solve the task while also keeping perfect accuracy on the seed data. We call this process of generating populations seeded self-play (S2P). A speaker-listener game. We construct a referential game similar to the Task & Talk game from, except with a single turn. The game is cooperative and consists of 2 agents, a speaker and a listener. The speaker agent observes an object with a certain set of properties, and must describe the object to the listener using a sequence of words (represented by one-hot vectors). The listener then attempts to reconstruct the object. More specifically, the input space consists of p properties (e.g. shape, color) and t types per property (e.g. triangle, square). The speaker observes a symbolic representation of the input x, consisting of the concatenation of p one-hot vectors, each of length t. The number of possible inputs scales as t^p. We define the vocabulary size (the length of each one-hot vector sent from the speaker) as |V|, and fix the number of words sent to be w. Developing a compositional language. To simulate a simplified form of human language on this task, we programmatically generate a perfectly compositional language, by assigning each 'concept' (each type of each property) a unique symbol. In other words, to describe a blue shaded triangle, we create a language where the output description would be "blue, triangle, shaded", in some arbitrary order and without prepositions. By 'unique symbol', we mean that no two concepts are assigned the same word. By generating this language programmatically, we avoid the need to have humans in the loop for testing, which allows us to iterate much more quickly.
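A short numpy sketch of the game input and the hand-coded compositional language described above: an object is the concatenation of p one-hot vectors of length t, and the compositional bot names each active concept with its own unique symbol; helper names and the particular index-to-symbol mapping are our illustrative choices.

```python
import numpy as np

P, T = 3, 4                    # p properties, t types per property
VOCAB = P * T                  # one unique symbol per concept

def random_object(rng):
    """An object is one type per property, encoded as p concatenated one-hot vectors."""
    types = rng.integers(0, T, size=P)
    x = np.zeros((P, T))
    x[np.arange(P), types] = 1.0
    return x.reshape(-1), types

def compositional_message(types, rng):
    """The compositional-bot speaker: concept (property, type) gets the symbol
    property * T + type, emitted in arbitrary order and without prepositions."""
    symbols = [prop * T + typ for prop, typ in enumerate(types)]
    return list(rng.permutation(symbols))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, types = random_object(rng)
    print("object types:", types)
    print("message symbols:", compositional_message(types, rng))
    print("input length:", x.shape[0], "possible inputs:", T ** P)
```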
This is feasible because of the simplicity of our speaker-listener environment; we do not expect that generating these programmatic languages is practical when scaling to more complex environments. We call these agents compositional bots (CBs). The message produced by the speaker is a sequence of p categorical random variables which are discretized using Gumbel-Softmax with an initial temperature τ = 1. We set the vocabulary size to be equal to the total number of concepts p · t. In our initial experiments, we train our agents using Cross Entropy which is summed over each property p. We use the Adam optimizer with a learning rate of 0.001. We first demonstrate the with a meta-learning listener (a meta-listener), that learns from the different speakers of each training population. In Section 5 below, we perform L2C experiments varying the type and number of training populations, speaker and listener parameterizations, and listener meta-learning algorithms. Unless otherwise specified, we use an overparameterized linear policy (i.e. an MLP with linear activations and 500 'hidden units') for the speaker and listener, and the meta-listener is trained with a first-order version of MAML on 50 purely compositional training populations (CBs). Here, we describe our initial into the factors affecting the performance of L2C. Since our ultimate goal is to transfer to learning a human language in as few human interactions as possible, we measure success based on the number of samples required for the meta-learner to reach a certain performance level (95%) on a held-out test population, and we permit ourselves as much computation during pre-training as necessary. Varying meta-learning algorithms We experimented with several algorithms to train our meta-listener: a randomly initialized agent, an agent pre-trained on all populations that performs n = 1 update per population, the Reptile algorithm that performs n > 1 updates per population, and a first-order variant of MAML (FOMAML) BID5. In our initial experiments, we found that the Reptile and the pre-training agent didn't improve significantly over the random initialization baseline. However, when we have enough training populations, the FOMAML algorithm improved significantly over the randomly initialized baseline, and even approaches the minimum number of examples a human would need to solve the task (60, one for each word in the vocabulary) -see FIG0.Varying listener parameterizations We tried various models for the meta-listener, including an LSTM, an MLP, and a linear model. While L2C with FOMAML worked in all of these cases, we found the best with an over-parameterized linear model (a 1-layer MLP with linear activations). Strangely, the performance improved even further when adding a We suspect the over-parameterization helped (over a regular linear model) due to improved gradient descent dynamics. Varying the number of training populations As can be inferred from FIG0, having more training populations improves performance. Having too few training populations (eg: 5 train encoders) in overfitting to the set of training populations and as the meta-learning progresses, the model performs worse on the test populations. For more than 10 training encoders, models trained with L2C require fewer samples to generalize to a held-out test population than a model not trained with L2C. We wanted to see if we can further reduce the number of samples required after L2C. 
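Referring back to the training details given at the start of this section (messages as p categorical variables discretized with Gumbel-Softmax at temperature 1, vocabulary size p · t, cross-entropy summed over properties, Adam with learning rate 0.001), the following PyTorch sketch shows one speaker-listener update; the simple linear speaker and listener are illustrative stand-ins, not the authors' exact models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

P, T = 3, 4                    # p properties, t types per property
VOCAB = P * T                  # vocabulary size = total number of concepts

speaker = nn.Linear(P * T, P * VOCAB)        # simple linear-style policies
listener = nn.Linear(P * VOCAB, P * T)
opt = torch.optim.Adam(list(speaker.parameters()) + list(listener.parameters()), lr=1e-3)

def training_step(x, types):
    """x: (batch, P*T) one-hot object encodings; types: (batch, P) ground-truth types."""
    logits = speaker(x).view(-1, P, VOCAB)
    # Discretize each of the p message symbols with straight-through Gumbel-Softmax, tau = 1.
    message = F.gumbel_softmax(logits, tau=1.0, hard=True).view(-1, P * VOCAB)
    recon = listener(message).view(-1, P, T)
    # Cross-entropy summed over the p properties.
    loss = sum(F.cross_entropy(recon[:, i, :], types[:, i]) for i in range(P))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

if __name__ == "__main__":
    batch_types = torch.randint(0, T, (8, P))
    batch_x = F.one_hot(batch_types, T).float().view(8, -1)
    for _ in range(5):
        print(training_step(batch_x, batch_types))
```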
So instead of doing L2C on a population of compositional bots, we train the population of agents using Seeded self-play (S2P). We collect some seed data from a single compositional bot which we call as seed dataset. Now we partition this data into train and test sets where the train set is used to train the agents via S2P. This set of trained populations is now used as the set of populations for meta-training (L2C).We were able to get the number of samples from 60 in the best performing vanilla L2C case to 20 in the L2C with S2P case showing that the seeded self-play indeed helps in improving sample efficiency. There are several immediate directions for future work: training the meta-agent via RL rather than supervised learning, and training the meta-agent as a joint speaker-listener (i.e. taking turns speaking and listening), as opposed to only listening. We also want to scale L2C training to more complicated tasks involving grounded language learning, such as the Talk the Walk dataset, which involves two agents learning to navigate New York using language. More broadly, there are still many challenges that remain for the L2C framework. In fact, there are unique problems that face each of the phases described in Section 2. In Phase 1, how do we know we can train agents to solve the tasks we want? Recent work has shown that learning emergent communication protocols is very difficult, even for simple tasks BID12. This is particularly true in the multiagent reinforcement learning (RL) setting, where deciding on a communication protocol is difficult due to the nonstationary and high variance of the gradients BID12. This could be addressed in at least two ways: by assuming the environment is differentiable, and backpropagating gradients using a stochastic differentiation procedure (9; 14), or by'seeding' each population with a small number of human demonstrations. Point is feasible because we are training in simulation, and we have control over how we build that simulation -in short, it doesn't really matter how we get our trained agent populations, so long as they are useful for the meta-learner in Phase 2.In Phase 2, the most pertinent question is: how can we be sure that a small number of updates is sufficient for a meta-agent to learn a language it has never seen before?The short answer is that it doesn't need to completely learn the language in only a few updates; rather it just needs to perform better on the language-task in the host population after a few updates, in order to provide a useful training signal to the meta-learner. For instance, it has been shown that the model agnostic meta-learning (MAML) algorithm can perform well when multiple gradient steps are taken at test time, even if it is only trained with a single inner gradient step. Another way to improve the meta-learner performance is to provide a dataset of agent interactions for each population. In other words, rather than needing to metalearner perform well after interacting with a population a few times, we'd like it to perform well after doing some supervised learning on this dataset of language, and after a few interactions. After all, we do have lots of available datasets of human language, and not using this seems like a waste. Finally, in Phase 3, how do we know that the meta-learner will be able to generalize to learn a human language? This comes down to the similarity between the training populations and human language, and the diversity of the training populations. 
For instance, we expect certain properties like compositionality to be important in the training populations for transferring to human languages, which are inherently compositional. But the training languages may not need to be very close to human language; as a comparison, significant progress has been made in transferring robots from simulation the real world using domain randomization, i.e. by adding random image textures during training. Even though these image textures look nothing like the real world, it helps improve robustness of the learner, which allows it to generalize. Providing a detailed examination of the required properties of the trained population languages, such that a meta-learner is able to generalize to human language, is an important direction for future work. | We propose to use meta-learning for more efficient language learning, via a kind of 'domain randomization'. | 677 | scitldr |
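To close out this example, a hedged sketch of first-order MAML (FOMAML) meta-training for the meta-listener used in the experiments above: the listener is adapted to each training population (each with its own symbol mapping) for a few inner steps, and the post-adaptation gradient is copied onto the meta-parameters; every concrete choice here (sizes, inner steps, data generation) is an assumption for illustration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

P, T = 3, 4
VOCAB = P * T

def population_batch(perm, n=32):
    """A training population is summarized by a permutation of symbols (its language);
    produce (message, types) pairs in that language."""
    types = torch.randint(0, T, (n, P))
    symbols = perm[torch.arange(P) * T + types]          # concept id -> population-specific symbol
    messages = F.one_hot(symbols, VOCAB).float().view(n, -1)
    return messages, types

def adapt_loss(listener, batch):
    messages, types = batch
    recon = listener(messages).view(-1, P, T)
    return sum(F.cross_entropy(recon[:, i, :], types[:, i]) for i in range(P))

meta_listener = nn.Linear(P * VOCAB, P * T)
meta_opt = torch.optim.Adam(meta_listener.parameters(), lr=1e-3)
populations = [torch.randperm(VOCAB) for _ in range(10)]  # 10 training populations

for step in range(20):                                    # meta-training loop
    perm = populations[step % len(populations)]
    fast = copy.deepcopy(meta_listener)                   # inner-loop copy
    inner_opt = torch.optim.SGD(fast.parameters(), lr=0.1)
    for _ in range(3):                                    # a few inner updates per population
        loss = adapt_loss(fast, population_batch(perm))
        inner_opt.zero_grad(); loss.backward(); inner_opt.step()
    # First-order MAML: copy the post-adaptation gradient onto the meta-parameters.
    meta_loss = adapt_loss(fast, population_batch(perm))
    fast.zero_grad(); meta_loss.backward()
    meta_opt.zero_grad()
    for meta_p, fast_p in zip(meta_listener.parameters(), fast.parameters()):
        meta_p.grad = fast_p.grad.clone()
    meta_opt.step()
    print(step, meta_loss.item())
```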
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, and based on this we introduce a novel online algorithm. Our method adaptively interpolates between two recently proposed techniques , featuring increased stability and faster convergence. We show empirically that the proposed technique compares favorably with baselines and related methodsin terms of final test accuracy. Learning rate (LR) adaptation for first-order optimization methods is one of the most widely studied aspects in optimization for learning methods -in particular neural networks -with early work dating back to the origins of connectionism . More recent work focused on developing complex schedules that depend on a small number of hyperparameters (; Orabona & Pál, 2016). Other papers in this area have focused on the optimization of the (regularized) training loss (; ;). While quick optimization is desirable, the true goal of supervised learning is to minimize the generalization error, which is commonly estimated by holding out part of the available data for validation. Hyperparameter optimization (HPO), a related but distinct branch of the literature, specifically focuses on this aspect, with less emphasis on the goal of rapid convergence on a single task. Research in this direction is vast (see for an overview) and includes model-based , model-free , and gradientbased approaches. Additionally, works in the area of learning to optimize have focused on the problem of tuning parameterized optimizers on whole classes of learning problems but require prior expensive optimization and are not designed to speed up training on a single specific task. The goal of this paper is to automatically compute in an online fashion a learning rate schedule for stochastic optimization methods (such as SGD) only on the basis of the given learning task, aiming at producing models with associated small validation error. We study the problem of finding a LR schedule under the framework of gradient-based hyperparameter optimization : we consider as an optimal schedule η * = (η * 0, . . ., η * T −1) ∈ R T + a solution to the following constrained optimization problem min{f T (η) = E(w T (η)): η ∈ R T + } s.t. w 0 =w, w t+1 (η) = Φ t (w t (η), η t ) for t = {0, . . ., T − 1} = [T], where E: R d → R + is an objective function, Φ t: is a (possibly stochastic) weight update dynamics,w ∈ R d represents the initial model weights (parameters) and finally w t are the weights after t iterations. We can think of E as either the training or the validation loss of the model, while the dynamics Φ describe the update rule (such as SGD, SGD-Momentum, Adam etc.). For example in the case of SGD, Φ t (w t, η t) = w t − η t ∇L t (w t), with L t (w t) the (possibly regularized) training loss on the t-th minibatch. The horizon T should be large enough so that the training error can be effectively minimized, in order to avoid underfitting. Note that a too large value of T does not necessarily harm since η k = 0 for k >T is still a feasible solution, implementing early stopping in this setting. Under review as a conference paper at ICLR 2020 Problem can be in principle solved by any HPO technique. 
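To make the problem above concrete for SGD dynamics: evaluating f_T(η) amounts to unrolling training with the candidate schedule and measuring E at the final iterate, and on small problems the full hypergradient ∇f(η) can be obtained by differentiating through the unrolled computation, as in this toy PyTorch sketch (the quadratic losses and all names are our assumptions).

```python
import torch

d, T = 5, 50
torch.manual_seed(0)
A = torch.randn(d, d); A = A @ A.t() + d * torch.eye(d)    # toy strongly convex problem
w0 = torch.randn(d)

def train_loss(w):        # L_t: here the same quadratic at every step
    return 0.5 * w @ A @ w

def val_loss(w):          # E: a shifted quadratic used as the 'validation' objective
    return 0.5 * (w - 1.0) @ A @ (w - 1.0)

def f_T(eta):
    """Unroll the SGD dynamics w_{t+1} = w_t - eta_t * grad L_t(w_t) and return E(w_T)."""
    w = w0.clone().requires_grad_(True)
    for t in range(T):
        g = torch.autograd.grad(train_loss(w), w, create_graph=True)[0]
        w = w - eta[t] * g
    return val_loss(w)

eta = torch.full((T,), 0.01, requires_grad=True)            # a candidate LR schedule
hypergrad = torch.autograd.grad(f_T(eta), eta)[0]           # the full hypergradient vector
print(hypergrad[:5])
```

Each such evaluation costs an entire training run, which is exactly why the rest of the paper turns to online approximations.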
However, most HPO techniques, including those based on hypergradients or on a bilevel programming formulation would not be suitable for the present purpose since they require multiple evaluations of f (which, in turn, require executions of the weight optimization routine), thus defeating one of the main goals of determining LR schedules, i.e. speed. In fact, several other researchers (; ; ; ; ;) have investigated related solutions for deriving greedy update rules for the learning rate. A common characteristic of methods in this family is that the LR update rule does not take into account information from the future. At a high level, we argue that any method should attempt to produce updates that approximate the true and computationally unaffordable hypergradient of the final objective with respect to the current learning rate (in relation to this, discusses the bias deriving from greedy or short-horizon optimization). In practice, different methods resort to different approximations or explicitly consider greedily minimizing the performance after a single parameter update (; ;). The type of approximation and the type of objective (i.e. the training or the validation loss) are in principle separate issues although comparative experiments with both objectives and the same approximation are never reported in the literature and validation loss is only used in the experiments reported in. One additional aspect needs to be taken into account: even when the (true or approximate) hypergradient is available, one still needs to introduce additional hyper-hyperparameters in the design of the online learning rate adaptation algorithm. For example in , hyper-hyperparameters include initial value of the learning rate η 0 and the hypergradient learning rate β. We find in our experiments that can be quite sensitive to the choice of these constants. In this work, we make a step forward in understanding the behavior of online gradient-based hyperparameter optimization techniques by (i) analyzing in Section 2 the structure of the true hypergradient that could be used to solve Problem if wall-clock time was not a concern, and (ii) by studying in Section 3 some failure modes of previously proposed methods along with a detailed discussion of the type of approximations that these methods exploit. In Section 4, based on these considerations, we develop a new hypergradient-based algorithm which reliably produces competitive learning rate schedules aimed at lowering the final validation error. The algorithm, which we call MARTHE (Moving Average Real-Time Hyperparameter Estimation), has a moderate computational cost and can be interpreted as a generalization of the algorithms described in and. Unlike previous proposals, MARTHE is almost parameter-free in that it incorporates heuristics to automatically tune its configuration parameters (i.e. hyper-hyperparameters). In Section 5, we empirically compare the quality of different hypergradient approximations in a small scale task where true hypergradient can be exactly computed. In Section 6, we present a set of real world experiments showing the validity of our approach. We finally discuss potential future applications and research directions in Section 7. We study the optimization problem under the perspective of gradient-based hyperparameter optimization, where the learning rate schedule η = (η 0, . . ., η T −1) is treated as a vector of hyperparameters and T is a fixed horizon. Please refer to Appendix A for a summary of the main notation used throughout the paper. 
Since the learning rates are positive real-valued quantities, assuming both E and Φ are smooth functions, we can compute the gradient of f ∈ R T, which is given by where " " means transpose. The total derivativeẇ T can be computed iteratively with forward-mode algorithmic differentiation aṡ The Jacobian matrices A t and B t depend on w t and η t, but we will leave these dependencies implicit to ease our notation. In the case of SGD 1, A t = I − η t H t (w t) and [B t] j = −δ tj ∇L t (w t), where subscripts denote columns (starting from 0), δ tj = 1 if t = j and 0 otherwise and H t is the Hessian of the training error on the t−th mini-batch 2. We also note that, given the high dimensionality of η, reverse-mode differentiation would in a more efficient (running-time) implementation. We use here forward-mode both because it is easier to interpret and visualize and also because it is closely related to the computational scheme behind MARTHE, as we will show in Section 4. Finally, we note that stochastic approximations of Eq. may be obtained with randomized telescoping sums or hyper-networks based stochastic approximations . Eq. 3 describes the so-called tangent system which is a discrete affine time-variant dynamical system that measures how the parameters of the model would change for infinitesimal variations of the learning rate schedule, after t iterations of the optimization dynamics. Notice that the "translation matrices" B t are very sparse, having, at any iteration, only one non-zero column. This means that [ẇ t] j remains 0 for all j ≥ t: η t affects only the future parameters trajectory. Finally, for a learning rate η t, the derivative (a scalar) is where the last equality holds true for SGD. Eq. can be read as the scalar product between the gradients of the training error at the t-th step and the objective E at the final iterate, transformed by the accumulated (transposed) Jacobians of the optimization dynamics, shorthanded by P T −1 t+1. As it is apparent from Eq., given w t, the hypergradient of η t is affected only by the future trajectory and does not depend explicitly on η t. In its original form, where each learning rate is left free to take any permitted value, Eq. represents a highly nonlinear setup. Although, in principle, it could be solved by projected gradient descent, in practice it is unfeasible even for small problems: evaluating the gradient with forward-mode is inefficient in time, since it requires maintaining a (large) matrix tangent system. Evaluating it with reverse-mode is inefficient in memory, since the entire weight trajectory (w i) T i=0 should be stored. Furthermore, it can be expected that several updates of η are necessary to reach convergence where each update requires computation of f T and the entire parameter trajectory in the weight space. Since this approach is computationally very expensive, we turn out attention to online updates where η t is required to be updated online based only on trajectory information up to time t. Before developing and motivating our proposed technique, we discuss two previous methods to compute the learning rate schedule online. The real-time hyperparameter optimization (RTHO) algorithm suggested in , reminiscent of stochastic meta-descent , is based on forward-mode differentiation and uses information from the entire weight trajectory by accumulating partial hypergradients. Hypergradient descent (HD), proposed in and closely related to the earlier work of , aims at minimizing the loss w.r.t. 
the learning rate after one step of optimization dynamics. It uses information only from the past and current iterate. Both methods implement update rules of the type where ∆η t is an online estimate of the hypergradient, β > 0 is a step-size or hyper-learning rate and the max ensures positivity 3. To ease the discussion, we omit the stochastic (mini-batch) evaluation of the training error L and possibly of the objective E. The update rules 4 are given by for RTHO and HD respectively, where P t−1 t:= I. As it can be seen ): the correction term r can be interpreted as an "on-trajectory approximations" of longer horizon objectives as we will discuss in Section 4. Although successful in some learning scenarios, we argue that both these update rules suffer from (different) pathological behaviors, as HD may be "shortsighted", being prone to underestimate the learning rate (as noted by), while RTHO may be too slow to adapt to sudden changes of the loss surface or, worse, may be unstable, with updates growing exponentially in magnitude. We exemplify these behaviors in Fig. 1, using two bidimensional test functions 5 from the optimization literature, where we set E = L and we perform 500 steps of gradient descent, from a fixed initial point. The Beale function, on the left, presents sharp peaks and large plateaus. RTHO consistently outperforms HD for all probed values of β that do not lead to divergence (Fig. 3 upper center). This can be easily explained by the fact that in flat regions gradients are small in magnitude, leading to ∆ HD η t to be small as well. RTHO, on the other hand, by accumulating all available partial hypergradients and exploiting second order information, is capable of making faster progress. We use a simplified and smoothed version of the Bukin function N. 6 to show the opposite scenario (Fig. 3 lower center and right). Once the optimization trajectory closes the valley of minimizers y = 0.01x, RTHO fails to discount outdated information, bringing the learning rate first to grow exponentially, and then to suddenly vanish to 0, as the gradient changes direction. HD, on the other hand, correctly damps η and is able to maintain the trajectory close to the valley. These considerations suggest that neither ∆ RTHO nor ∆ HD provide globally useful update directions, as large plateaus and sudden changes on the loss surface are common features of the optimization landscape of neural networks . Our proposed algorithm smoothly and adaptively interpolates between these two methods, as we will present next. In this section, we develop and motivate MARTHE, an algorithm for computing LR schedule online during a single training run. This method maintains an adaptive moving-average over approximations of Eq. of increasingly longer horizon, using the past trajectory and gradients to retain a low computational overhead. Further, we show that RTHO and HD outlined above, can be interpreted as special cases of MARTHE, shedding further light on their behaviour and shortcomings. 4 In the authors the hyperparameter is updated every K iterations. Here we focus on the case K = 1 which better allows for a unifying treatment. HD is developed using as objective the training loss L rather than the validation loss E. We consider here without loss of generality the case of optimizing E. 5 We use the Beale function defined as L(x, y) = (and a simplified smoothed version of Buking N. 6: L(x, y) = ((y − 0.01x) 2 + ε) 1/2 + ε, with ε > 0. Shorter horizon auxiliary objectives. 
The g K s define a class of shorter horizon objective functions, indexed by K, which correspond to the evaluation of E after K steps of optimization, starting from u ∈ R d and using ξ as the LR schedule 6. Now, the derivative of g K w.r.t. ξ 0, denoted g K, is given by where the last equality holds for SGD dynamics. Once computed on subsets of the original optimization dynamics (w i) T i=0, the derivative reduces for K = 1 to g 1 (w t, η t) = −∇E(w t+1)∇L(w t) (for SGD dynamics), and for Intermediate values of K yield cheaper, shorter horizon approximations of. Approximating the future trajectory with the past. Explicitly using any of the approximations given by g K (w t, η) as ∆η t is, however, still largely impractical, especially for K 1. Indeed, it would be necessary to iterate the map Φ for K steps in the future, with the ing (w t+i) iterations discarded after a single update of the learning rate. For K ∈ [t], we may then consider evaluating g K exactly K steps in the past, that is evaluating HD, which is computationally inexpensive. However, when past iterates are close to future ones (such as in the case of large plateaus), using larger K's would allow in principle to capture longer horizon dependencies present in the hypergradient structure of Eq. 4. Unfortunately the computational efficiency of K = 1 does not generalize to K > 1, since setting ∆η t = g K would require maintaining K different tangent systems. Discounted accumulation of g k s. The definition of the g K s, however, allows one to highlight the recursive nature of the accumulation of g K. Indeed, by maintaining the vector tangent system requires only updating and recomputing the gradient of E for a total cost of O(c(Φ)) per step both in time and memory using fast Jacobians vector products where c(Φ) is the cost of computing the optimization dynamics (typically c(Φ) = O(d)). The parameter µ ∈ allows to control how quickly past history is forgotten. One can notice that ∆ RTHO η t = S t,1 (w 0, (η j) t−1 i=0 ), while µ = 0 recovers ∆ HD η t. Values of µ < 1 help discounting outdated information, while as µ increases so does the horizon of the hypergradient approximations. The computational scheme of Eq. 9 is quite similar to that of forward-mode algorithmic differentiation for computingẇ (see Section 2 and Eq. 3); we note, however, that the "tangent system" in Eq. 9, exploiting the sparsity of the matrices B t, only keeps track of the variations w.r.t the first component ξ 0, drastically reducing the running time. Adapting µ and β online. We may set ∆η t = S t,µ (w 0, (η j) t−1 i=0 ). This still would require choosing a fixed value of µ, which should be validated on a separate set of held-out data. This may add an undesirable overhead on the optimization procedure. Furthermore, as discussed in Section 3, different regions of the loss surface may benefit from different effective approximation horizons. To address these issues, we propose to compute µ online. Ideally, we would like to verify that ∆η t [∇f (η)] t > 0, i.e. whether the proposed update is a descent direction w.r.t. the true hypergradient. While this is unfeasible (since ∇f (η) is unavailable), we can cheaply compute, after the update w t+1 = Φ(w t, η t), the quantity which, ex post, relates µ t to the one-step descent condition for g 1. We set µ t+1 = h µ (q(µ t)) where h µ is a monotone scalar function with range in (note that if µ t = 0 then Eq. 10 is non-negative). 
For space limitations, we defer the discussion of the choice of h µ and the effect of adapting online the approximation horizons to the Appendix. We can finally define the update rule for MARTHE as We further propose to adapt β online, implementing with this work a suggestion from. We regard the LR schedule as a function of β and apply the same reasoning done for η, keeping µ = 0, to avoid maintaining an additional tangent system which would involve third order derivatives of the training loss L. We then set β t+1 = β t − β∆η t+1 · ∆η t, where, with a little notation override, β becomes a fixed step-size for adapting the hyper-learning rate. This may seem a useless trade-off at first; yet, as we observed experimentally, one major advantage of lifting the adaptive dynamics by one more level is that it injects additional stability in the learning system, in the sense that good values of this last parameter of MARTHE lays in a much broader range range than those of good hyper-learning rates. In fact, we observe that when β is too high, the dynamics diverges within the first few optimization steps; whereas, when it does not, the final performances are rather stable. Algorithm 1 presents the pseudocode of MARTHE. The runtime and memory requirements of the algorithm are dominated by the computation of the variables Z. Being these structurally identical to the tangent propagation of forward mode algorithmic differentiation, we conclude that the runtime complexity is up to four times that of the underlying optimization dynamics Φ and the memory requirement up to two times (see , Sec. 4). We suggest the default values of β 0 = 0 and η 0 = 0 when no prior knowledge of the task at hand is available. Finally, we suggest to wrap Algorithm 1 in a selection procedure for β, where one may start with a high enough β and aggressively diminish it (e.g. decimating it) until the learning system does not diverge. Algorithm 1 MARTHE; requires β, η0, β0 = 0 Initialization of w, η and {Compute µ t, see Eq. 10} β t+1 ← β t − β∆η t+1 · ∆η t {Update hyper-LR} end for In this section, we empirically compare the optimized LR schedules found by approximately solving Problem 1 by gradient descent (denoted LRS-OPT), where the hypergradient is given by Eq. 4, against HD, RTHO & MARTHE schedules where for MARTHE we consider both the adaptation schemes for µ and β presented in the previous section as well as fixed hyper-learning rate and discount factor µ. We are interested in understanding and visualizing the qualitative similarities among the schedules, as well as the effect of µ and β and the adaptation strategies on the final performance measure. To this end, we trained feedforward neural networks, with three layers of 500 hidden units each, on a subset of 7000 MNIST images. We used the cross-entropy loss and SGD as the optimization dynamics Φ, with a mini-batch size of 100. We further sampled 700 images to form the validation set and defined E to be the validation loss after T = 512 optimization steps (about 7 epochs). For LRS-OPT, we randomly generated different mini-batches at each iteration to prevent the schedule from unnaturally adapting to a specific sample progression 7. We initialized η = 0.01 · 1 512 for LRS-OPT and set η 0 = 0.01 for all adaptive methods, and repeated the experiments for 20 random seeds (except LRS-OPT, repeated only for 4 seeds). Results are visualized in Figure 2. 
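The scheme described in this section can be sketched compactly in PyTorch. The snippet below is not the authors' released implementation: it is a minimal reconstruction for plain SGD on a single parameter tensor, in which the tangent vector Z of Eq. 9 is propagated with a Hessian-vector product (so that µ = 0 recovers HD and µ = 1 recovers RTHO), and the online adaptation of µ uses a simplified stand-in for h µ based on the normalized descent quantity of Eq. 10 (the paper's exact heuristic, which also involves a counter c t, is deferred to its appendix). All default values and the exact update ordering are illustrative assumptions.

import torch

def marthe_sgd(train_loss, val_loss, w, eta0=0.01, beta_bar=1e-6, steps=200):
    # Minimal MARTHE-like loop for plain SGD on one parameter tensor `w`.
    # `train_loss(w)` and `val_loss(w)` return scalar losses; any mini-batch
    # sampling is assumed to happen inside these callables.
    eta, beta, mu = eta0, 0.0, 0.0          # beta_0 = 0 follows the suggested default
    z = torch.zeros_like(w)                 # tangent vector accumulating partial hypergradients
    prev_delta, schedule = None, []
    for t in range(steps):
        w = w.detach().requires_grad_(True)
        L = train_loss(w)
        gL = torch.autograd.grad(L, w, create_graph=True)[0]
        Hz = torch.autograd.grad(gL, w, grad_outputs=z)[0]     # H_t z via double backward
        gL = gL.detach()
        w_next = (w - eta * gL).detach().requires_grad_(True)  # SGD step

        # Discounted tangent recursion (SGD case): mu = 0 -> HD, mu = 1 -> RTHO.
        z = mu * (z - eta * Hz) - gL

        # Hypergradient estimate and learning-rate update.
        gE = torch.autograd.grad(val_loss(w_next), w_next)[0]
        delta = torch.dot(gE.flatten(), z.flatten()).item()
        eta = max(eta - beta * delta, 0.0)

        # Online adaptation of mu (descent condition w.r.t. g_1) and of beta.
        g1 = -torch.dot(gE.flatten(), gL.flatten()).item()     # one-step hypergradient
        q = delta * g1
        mu = min(max(q / (g1 * g1 + 1e-12), 0.0), 1.0)         # simplified stand-in for h_mu
        if prev_delta is not None:
            beta = beta - beta_bar * delta * prev_delta        # hyper-learning-rate update
        prev_delta = delta

        schedule.append(eta)
        w = w_next
    return schedule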
Figure 2 (left) shows the LRS-OPT schedules found after 5000 iterations of gradient descent: the plot reveals a strong initialization (random seed) specific behavior of η * for approximately the first 100 steps. The LR schedule then stabilizes or slowly decreases up until around 50 iterations before the final time, at which point it quickly decreases (recall that with LRS-OPT all η i, including η 0, are optimized "independently" and may take any permitted value). Figure 2 (center) presents a qualitative comparison between the offline LRS-OPT schedule and the online ones. HD generates schedules that quickly decay to very small values, while RTHO schedules linger or fail to decrease, possibly causing instability and divergence in certain cases. Fixing µ = 0.99 seems to produce schedules that remarkably mimic the optimized one; yet, unfortunately, this happens only for a small range of values of µ, which we expect to be task dependent. (Caption of Figure 2, center and right panels. Center: qualitative comparison between offline and online schedules for one random seed; for MARTHE with fixed µ, we report the best performing one; for each method, we report the schedule generated with the value of β that achieves the best average final validation accuracy; plots for the remaining random seeds can be found in the appendix. Right: average validation accuracy over 20 random seeds, for various values of β; when no point is reported, the achieved average accuracy for that configuration falls below 88% (or diverged); for reference, the average validation accuracy of the network trained with the constant schedule η = 0.01 · 1 512 is 87.5%, while LRS-OPT attains 96.2%.) Using both the adaptation schemes for µ and β (curve named MARTHE in the plots) allows us to reliably find highly non-trivial schedules that capture the general behavior of the optimized one (additional plots in the Appendix). Figure 2 (right) shows the average validation accuracy over 20 runs (rather than loss, for easier interpretation) of the online methods, varying β and discarding values below 88% of validation accuracy. In particular, fixing µ > 0 seems to have a beneficial impact for all tried values of the hyper-learning rate β. Using only the heuristic for adapting µ online (blue line, named h µ in the plot) further helps, but is somewhat sensitive to the choice of β. Using both the adaptive mechanisms, besides improving the final validation accuracy, seems to drastically lower the sensitivity to the choice of this parameter, provided that the learning system does not diverge. Finally, we note that even with this very simple setup, a single run of LRS-OPT (which comprises 5000 optimization steps) takes more than 2 hours on an M-40 NVIDIA GPU. In contrast, all adaptive methods require less than a minute to conclude (HD being even faster). We run experiments with an extensive set of learning rate scheduling techniques. Specifically, we compare MARTHE against the following fixed LR scheduling strategies: (i) exponential decay (ED), where the LR schedule is defined by η t = η 1 γ t; (ii) staircase decay; and (iii) stochastic gradient descent with restarts (SGDR). Moreover, we compare against online strategies such as HD and RTHO. For all the experiments, we used a single Volta V100 GPU (AWS P3.2XL). We fix the batch size at 128 samples for all the methods, and terminate the training procedure after a fixed number of epochs. We set L as the cross entropy loss with weight-decay regularization (with factor of 5 · 10 −4) and set E as the unregularized cross entropy loss on validation data.
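The fixed baseline schedules listed above map directly onto standard PyTorch schedulers. The snippet below is one possible way to instantiate them with the hyperparameter values reported in the next paragraph (staircase: ×0.1 every 60 epochs; ED: γ = 0.99 per epoch; SGDR: T 0 = 10, T mult = 2); it is a sketch with a placeholder model, not the authors' experimental code.

import torch
from torch.optim import lr_scheduler

def make_scheduler(name, optimizer):
    if name == "staircase":      # decay the LR by 90% every 60 epochs
        return lr_scheduler.StepLR(optimizer, step_size=60, gamma=0.1)
    if name == "ED":             # exponential decay, factor 0.99 per epoch
        return lr_scheduler.ExponentialLR(optimizer, gamma=0.99)
    if name == "SGDR":           # warm restarts with T_0 = 10 and T_mult = 2
        return lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10, T_mult=2)
    raise ValueError(name)

model = torch.nn.Linear(10, 2)   # placeholder; VGG-11 / ResNet-18 in the actual experiments
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
sched = make_scheduler("SGDR", opt)
for epoch in range(200):
    # ... one epoch of training with `opt` on 128-sample mini-batches ...
    sched.step()                 # all schedulers above are stepped once per epoch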
All the experiments with SGDM have an initial learning rate (η 0) of 0.1 and for Adam, we set it to 3 · 10 −4. For staircase, we decay the learning rate by 90% after every 60 epochs. For exponential decay, we fix a decay factor of 0.99 per epoch, and for SGDR we use T 0 = 10 and T mult = 2. For HD and RTHO, we set β to 10 −6 and 10 −8 respectively. Momentum for SGDM is kept constant at 0.9. For Adam we used the standard values for the remaining configuration parameters. We run all the experiments with 5 different seeds, reporting average and standard deviation, recording accuracy, loss value and the generated learning rate schedules. We trained image classification models on two different datasets commonly used to benchmark optimization algorithms for deep neural networks: CIFAR-10, where we trained a VGG-11 network with BatchNorm using SGDM as the inner optimizer, and CIFAR-100, where we trained a ResNet-18 using Adam. The source code in PyTorch and TensorFlow to reproduce the experiments will be made publicly available. In Figures 3 and 4, we report the results of our experiments for CIFAR-10 with VGG-11 and CIFAR-100 with ResNet-18, respectively. For both figures, we report from left to right: accuracy in percentage, validation loss value, and an example of a generated learning rate schedule. MARTHE produces LR schedules that lead to trained models with very competitive final validation accuracy in both experimental settings, virtually requiring no tuning. For setting the hyper-learning rate step-size of MARTHE we followed the simple procedure outlined at the end of Section 4, while for the other methods we performed a grid search to select the best value of the respective algorithmic parameters. On CIFAR-10, MARTHE obtains a best average accuracy of 92.79%, statistically on par with SGDR (92.54%), while clearly outperforming the other two adaptive algorithms. On CIFAR-100, MARTHE leads to faster convergence during the whole training compared to all the other methods, reaching an accuracy of 76.68%, comparable to the staircase schedule with 76.40%. We were not able to achieve competitive results with SGDR in this setting, despite trying several values of the two main configuration parameters within the suggested range. Further, MARTHE produces aggressive schedules (see Figures 3 and 4, right, for an example) that increase the LR at the beginning of the training and sharply decrease it after a few epochs. We observe empirically that this leads to improved convergence speed and competitive final accuracy. Additional experimental validation is reported in Appendix D and includes results for MARTHE with an initial learning rate η 0 = 0. Finding a good learning rate schedule is an old but crucially important issue in machine learning. This paper makes a step forward, proposing an automatic method to obtain well-performing LR schedules that uses an adaptive moving average over increasingly long hypergradient approximations. MARTHE interpolates between HD and RTHO, taking the best of both worlds. The implementation of our algorithm is fairly simple within modern automatic differentiation and deep learning environments, adding only a moderate computational overhead over the underlying optimizer complexity.
In this work, we studied the case of optimizing the learning rate schedules for image classification tasks; we note, however, that MARTHE is a general technique for finding online hyperparameter schedules (albeit it scales linearly with the number of hyperparameters), possibly implementing a competitive alternative in other application scenarios, such as tuning regularization parameters . We plan to further validate the method both in other learning domains for adapting the LR and also to automatically tune other crucial hyperparameters. We believe that another interesting future research direction could be to learn the adaptive rules for µ and β in a meta learning fashion. Please refer to Table 1 for a summary and description of the notation used throughout the paper. Validation error Total derivative of w w.r.t. η, variable of the tangent system Jacobian of the dynamics w.r.t. the weights SGD: Jacobian of the dynamics w.r.t. the LRS SGD: Product of Jacobians from iteration s to r ∆η t Update for the learning rate at iteration t β ∈ R Variables of the vector tangent system Cf. tangent matrixẇ Heuristics for dampening factor µ See Appendix B B CHOICE OF HEURISTIC FOR ADAPTING µ We introduced in Section 4 a method to compute online the dampening factor µ t based on the quantity We recall that if q(µ t) is positive then the update ∆η t+1 is a descent direction for the one step approximation of the objective f T. We describe here the precise heuristic rule that we use in our experiments. Callq(µ t) = max(min(q(µ t) g 1 (w t, η t) −2, 1), 0) ∈ the normalized, thresholded q(µ t). We propose to set where c t acts as a multiplicative counter, measuring the effective approximation horizon. The ing heuristics is independent on the initialization of µ since Z 0 = 0. We note that whenever µ t is set to 0, the previous hypergradient history is forgotten. Applying this heuristic to the optimization of the two test functions of Sec. 3 reveals to be successful: for the Beale function, h µ selects µ t = 1 for all t, while for the smoothed Bukin, it selects µ t = 0 for around 40% of the iterations, bringing down the minimum optimality gap at 10 −6 for β = 0.0005. We conducted exploratory experiments with variants of h µ which include thresholding between -1 and 1 and "penalizing" updates larger than g 1 without observing statistically significant differences. We also verified that randomly setting µ t to 0 does not implement a successful heuristics, while introducing another undesirable configuration parameter. We believe, however that there is further space of improvement for h µ (and possibly to adapt the hyper-learning rate), since g 1 does not necessarily capture the long term dependencies of Problem 1. Meta-learning these update rules could be an interesting direction that we leave to future investigation. We show in Figure 5 the LR schedules for the experiments described in Section 5 for the remaining random seeds. The random seed controls the different initial points w 0, which is the same for all online methods and for LRS-OPT, and determines the mini-batch progression for the online methods (while for LRS-OPT the mini-batch progression is randomized at each outer iteration). We report in this section additional experimental to complement the analysis of Section 6. Figure 6 shows accuracy, validation loss and learning rates schedules for CIFAR-100 dataset with a ResNet-18 model and SGDM as (inner) optimization methods. 
In accordance with previous experimental validation we set the initial learning rate η 0 to 0.1 for all methods. We include also relevant statistics for MARTHE 0, that is MARTHE obtained by letting the schedule start at 0 (cyan lines in the plots). Setting η 0 = 0 is clearly not an optimal choice, nevertheless MARTHE is able to obtain competitive also in this disadvantaged scenario, producing schedules that quickly reach high values and then sharply decrease within the first 40 epochs. Figure 6 (left) reports a sample of generated schedule. It is important to highlight how the heuristic methods, such as exponential decay, are not able to handle η 0 = 0. In fact, for non-adaptive methods, η 0 is indeed another configuration parameter that must be tuned. MARTHE 0, on the other hand, constitute a virtually parameterless method (β can be quickly found with the strategy outlined at the end of Section 4) that can be employed in situations where we have no prior knowledge of the task at hand. Conversely as noted by and , too high (initial) learning rates are not well suited for gradient-based adaptive strategies: instability of the inner optimization dynamics indeed propagates to the hypergradient computation, possibly leading to "exploding" hypergradients. Finally, we tried another configuration of parameters for SGDR, in order to make the last restart completing the training exactly after 200 epochs. We selected T 0 = 10 and T mul = 2.264 (i.e. restarts at 10, 33, 84, 200). Figure 7 and Figure 8 report the same experiments of Section 6 with this slightly different baseline. We note that the performance of SGDR remains similar. E SENSITIVITY ANALYSIS OF INADAPTIVE MARTHE WITH RESPECT TO η 0, µ AND β In this section, we study the impact of η 0, µ and β for MARTHE, when our proposed online adaptive methodologies for µ and β are not applied. We think that the sensitivity of the methods is very important for the HPO algorithms to work well in practice, especially when they depend on the choice of some (new) hyperparameters such as µ and β. We show the sensitivity of inadaptive MARTHE with respect to η 0 and µ, fixing β. We used VGG-11 on CIFAR-10 with SGDM as optimizer, but similar can be obtained in the other cases. Figure 9 shows the obtained test accuracy fixing β to 10 −7 (Left) and 10 −8 (Right). The plots show a certain degree of sensitivity, especially with respect to the choice of a (fixed) µ. This suggest that implementing adaptive strategies to compute online the dampening factor µ and the hyper-learning rate β constitute an essential factor to achieve the competitive reported in Section 6 and Appendix D. Figure 9: Sensitivity analysis of inadaptive MARTHE with respect to η 0 and µ fixing the value of β to 10 −7 (Left) and 10 −8 (Right). We used VGG-11 on CIFAR-10 with SGDM as optimizer. Darker colors mean lower final accuracy. | MARTHE: a new method to fit task-specific learning rate schedules from the perspective of hyperparameter optimization | 678 | scitldr |
Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions. These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass. They are unable to generate an image interactively based on an incrementally additive text description (something that is more intuitive and similar to the way we describe an image). We propose a method to generate an image incrementally based on a sequence of graphs of scene descriptions (scene-graphs). We propose a recurrent network architecture that preserves the image content generated in previous steps and modifies the cumulative image as per the newly provided scene information. Our model utilizes Graph Convolutional Networks (GCN) to cater to variable-sized scene graphs along with Generative Adversarial image translation networks to generate realistic multi-object images without needing any intermediate supervision during training. We experiment with Coco-Stuff dataset which has multi-object images along with annotations describing the visual scene and show that our model significantly outperforms other approaches on the same dataset in generating visually consistent images for incrementally growing scene graphs. To truly understand the visual world, our models should be able to not only recognize images but also generate them. Generative Adversarial Networks, proposed by BID3 have proven immensely useful in generating real world images. GANs are composed of a generator and a discriminator that are trained with competing goals. The generator is trained to generate samples towards the true data distribution to fool the discriminator, while the discriminator is optimized to distinguish between real samples from the true data distribution and fake samples produced by the generator. The next step in this area is to generate customized images and videos in response to the individual tastes of a user. A grounding of language semantics in the context of visual modality has wide-reaching impacts in the fields of Robotics, AI, Design and image retrieval. To this end, there has been exciting recent progress on generating images from natural language descriptions. Conditioned on given text descriptions, conditional-GANs BID11 are able to generate images that are highly related to the text meanings. Samples generated by existing text-to-image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. Leading methods for generating images from sentences struggle with complex sentences containing many objects. A recent development in this field has been to represent the information conveyed by a complex sentence more explicitly as a scene graph of objects and their relationships BID7. Scene graphs are a powerful structured representation for both images and language; they have been used for semantic image retrieval BID6 and for evaluating BID0 and improving BID9 image captioning. In our work, we propose to leverage these scene graphs by incrementally expanding them into more complex structures and generating corresponding images. Most of the current approaches lack the ability to generate images incrementally in multiple steps while preserving the contents of the image generated so far. 
We overcome this shortcoming by conditioning the image generation process over the cumulative image generated over the previous steps and over the unseen parts of the scene graph. This allows our approach to generate high quality complex real-world scenes with several objects by distributing the image generation over multiple steps without losing the context. Recently, BID2 proposed an approach for incremental image generation but their method is limited to synthetic images due to the need of supervision in the intermediate step. Our approach circumvents the need for intermediate supervision by enforcing perceptual regularization and is therefore compatible with training for even real world images (as we show later).A visualization of our framework's outputs with a progressively growing scene graph can be seen in Figure 1. We can see how at each step new objects get inserted into the image generated so far without losing the context. To summarize, we make the following contributions,• We present a framework to generate images from structured scene graphs that allows the images to be interactively modified, while preserving the context and contents of the image generated over previous steps.• Our method does not need any kind of intermediate supervision and hence, is not limited to synthetic images (where you can manually generate ground truth intermediate images). It is therefore useful for generating real-world images (such as for MS-COCO) which, to the best of our knowledge, is the first attempt of its kind. Generating images from text descriptions is of great interest, both from a computer vision perspective, and a broader artificial intelligence perspective. Since the advent of Generative Adversarial Networks (GANs), there have been many efforts in this direction BID5. BID10 proposed a framework based on an LSTM and a conditional GAN to incrementally generate an image using a sentence. The words in the sentence were encoded using word2vec, and passed through an LSTM. A skip-though vector representing the semantic meaning of an entire sentence is used as the conditioning for the GAN. However, all of these works mostly focus on generating images with single objects (such as faces or flowers or birds). Even within these objects, the avenues of variance is quite limited. Generating more complex scenes with multiple objects and specific relationships between those objects is an even harder research problem. BID15 proposed an architecture based on multiple GANs stacked together, generating images in a coarse-to-fine manner. They later also proposed arranging the generators in a tree-like structure for improved BID16. More recently, BID4 proposed an end-to-end pipeline for inferring scene structure and generating images based on text descriptions. A similar approach was taken by BID13, where they used attention-based object and attribute decoders to infer bounding box locations of objects in the scene. However, for images with several objects such as in COCO-Stuff, the captions are often not descriptive enough to capture all the objects. Furthermore, the captions don't describe the relations between the objects in the image effectively. AttnGANs also begin with a low-resolution image, and then improve it over multiple steps to come up with a final image. However, there's no mechanism to capture consistency during incremental image generation. 
A more detailed failure case analysis is done in Section 4.Most recently, BID7 proposed to use scene-graphs as a convenient intermediate for image synthesis. Scene-graphs provide an efficient and interpretable representation of the objects in an image and their relationships. The input scene graph is processed with a graph convolution network which passes information along edges to compute embedding vectors for all objects. These vectors are used to predict bounding boxes and segmentation masks for all objects, which are combined to form a coarse scene layout. The layout is passed to a cascaded refinement network which generates an output image at increasing spatial scales. The model is trained adversarially against a pair of discriminator networks which ensure that output images look realistic. However, this model does not account for object saliency or temporal consistency in the generated images. It also fails for highly complex scene where there are several objects due to a single-pass image generation. A more detailed failure case analysis is done in Section 4.Another recent work by introduces a conditional text-to-image based generation approach to generate images iteratively, keeping track of ongoing context and history. Their method uses GRU to process text instructions into embeddings which go through conditioning augmentation and are fed into a GAN to generate contents on a canvas. They use a convex combination of feature maps to ensure consistency. Due to the availability of structured information in the form of scene graphs, we can instead filter out the latent embeddings of the already generated objects, thus allowing for better textural consistency. Moreover, their approach suffers from the drawback of needing supervision at every stage of generation for training and is therefore limited to only synthetic scenarios (such as CoDraw and i-CLEVR where intermediate images can be easily rendered). On the contrary, our approach employs perceptual similarity based regularization and effective use of graph-based embeddings to circumvent the need of ground truth for intermediate steps, making it compatible with even real-world images. As in most modern conditional image-generation formulations, we follow a generative-adversarial approach to image generation. Here the adversarial network penalizes the network based on how realistic the generated images are, as well as whether the required objects are present in them. Furthermore, a key part of the task is to preserve the relations between objects as specified in the scene graph in the generated image as well. For our baseline approach for generating images from scene graphs, we adopt the architecture proposed by BID7. The architecture consists of 3 main modules, a Graph Convolution network (GCN), Layout Prediction Network (LN) and a Cascade Refinement Network (CRN), which we describe in more detail below. For brevity, we omit an exhaustive description of these modules and request the reader to refer to the original paper for further details. Figure 2 provides an overview of the full model architecture. Graph Convolution Network. The Graph Convolution Network (GCN) is composed of several graph convolution layers, and can operate natively on graphs. GCN takes an input graph and computes new vectors for each node and edge. Each graph convolution layer propagates information along edges of the graph. The same function is applied to all graph edges, which ensures that a single convolution layer can work with arbitrary shaped graphs. 
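To make the graph convolution step more concrete, a single layer in the spirit of the description above can be sketched as a shared MLP over (subject, predicate, object) triples whose per-edge outputs are pooled back onto the nodes. The layer below is a simplified reconstruction, not the exact BID7 implementation (which additionally uses candidate averaging and separate output projections); all layer sizes are illustrative.

import torch
import torch.nn as nn

class TripleGraphConv(nn.Module):
    # One graph-convolution layer over (subject, predicate, object) triples.
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(3 * dim, 3 * dim), nn.ReLU(),
                                      nn.Linear(3 * dim, 3 * dim))

    def forward(self, obj_vecs, pred_vecs, edges):
        # obj_vecs: (O, D) object embeddings; pred_vecs: (E, D) predicate embeddings;
        # edges: (E, 2) long tensor of (subject_idx, object_idx) pairs.
        s, o = edges[:, 0], edges[:, 1]
        triples = torch.cat([obj_vecs[s], pred_vecs, obj_vecs[o]], dim=1)
        new_s, new_p, new_o = self.edge_mlp(triples).chunk(3, dim=1)
        # Scatter the per-edge messages back onto the nodes and average them.
        ones = torch.ones(edges.size(0), 1, device=obj_vecs.device)
        pooled = torch.zeros_like(obj_vecs).index_add(0, s, new_s).index_add(0, o, new_o)
        counts = torch.zeros(obj_vecs.size(0), 1, device=obj_vecs.device)
        counts = counts.index_add(0, s, ones).index_add(0, o, ones)
        new_obj_vecs = pooled / counts.clamp(min=1)
        return new_obj_vecs, new_p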
Layout Prediction Network. The GCN outputs an embedding vector for each object. These object embedding vectors are used by the layout prediction network to compute a scene layout by predicting a segmentation mask and bounding box for each object. Mask regression network and a box regression network are used to predict a soft binary mask and a bounding box, respectively. The layout prediction network hence acts as an intermediary between the graph and image domains. Cascade Refinement Network. Given a scene layout, the Cascade Refinement Network (CRN) is responsible for generating an image which respects the object positions in the scene layout. The CRN consists of a series of convolutional refinement modules. The spatial resolution doubles between Figure 2: Model architecture for incremental image generation using scene graphs. The Graph Convolution Network takes as input the scene graph and produces object embeddings which are then fed to the Scene Layout Network. The SLN generates a layout by predicting the bounding boxes and segmentation masks for the objects. These are finally sent to the Cascaded Refinement Network which generates the image corresponding to the graph. This process is called iteratively to add objects in the image.modules; which ensures that image generation is happening in a coarse-to-fine manner. The scene layout is first downsampled to the module input resolution and then fed to the module along with the previous module's output. Both these inputs are concatenated channel-wise and then passed to a pair of 3 × 3 convolution layers. This output is upsampled using nearest-neighbor interpolation and then passed to the next module. The output from the last module is finally passed to 2 convolution layers to produce the output image. Our method allows for preserving context across the sequentially generated images by conditioning subsequent steps of image generation over certain information from previous steps.• We extend BID7 with a recurrent architecture that generates images using incrementally growing scene graphs using the components discussed in previous section as shown in Figure 2.• To ensure that the image generated in the current step preserves the visual context from the previous steps, we replace three channels of the noise passed to CRN with the RGB channels of the image generated in the previous step. This encourages the CRN to generate the new image as similar as possible to the previously generated image.• Moreover, we want the SLN to generate a layout corresponding to only the newly added objects in the scene graph. To this end, we remove the representations generated by GCN corresponding to the objects generated in previous steps.• We do not have any ground truth for the intermediate generated images so we use perceptual loss for images generated in the intermediate steps to enforce the images to be perceptually similar to the ground truth final image. We do have L1 loss between the final image generated and the ground truth. We use an image level and an object level discriminator to ensure realism of the images and presence of the objects. These are trained as in the regular GAN formulation: DISPLAYFORM0 2. Box loss: Penalizes L1 distance between ground truth boxes from MS COCO vs the predicted labels as L box = n i ||b − b || 3. Mask loss: Penalizes difference between the masks predicted vs the ground truth masks, using cross entropy loss.4. 
L1 pixel loss: Penalizes the difference between the ground truth image from MS COCO and the final generated image at the end of the incremental generation. L1 pixel losses are also used to penalize the difference between the previous and current generated image. DISPLAYFORM1 5. Perceptual Similarity Loss: Serves as a regularization to ensure that the images generated at different steps are contextually similar to each other. Since we do not have ground truth for intermediate steps, we introduce an additional perceptual similarity loss using between the final ground truth image and the different images generated in the intermediate steps. This allows the model to'hallucinate' for the intermediate steps in a way that the contents of the image are similar to the ground truth. DISPLAYFORM2 where P φ is the function computing a latent embedding that captures the perceptual characteristics of an image. Thus we can provide additional supervision on the coordinates of the bounding boxes predicted by the layout network, to explicitly ensure the relations are preserved. We perform experiments on the 2017 COCO-Stuff dataset BID1 which augments a subset of the COCO dataset BID8 with additional stuff categories. The dataset annotates 40K train and 5K validation images with bounding boxes and segmentation masks for 80'thing' categories (people, cars, etc.) and 91'stuff' categories (sky, grass, etc.). We follow the procedure described in BID7 to construct synthetic scene graphs from these annotations based on the 2D image coordinates of the objects, using six mutually exclusive geometric relationships: left of, right of, above, below, inside, and surrounding. We create three splits for each image based on the number of objects in it. We randomly select 50% of the objects for the first split and incrementally add 25% objects for the next two splits. We then synthetically create separate scene graphs for each split. We train incremental generation for three steps, but this can be easily extended to more number of steps. Note that previous works like BID2 have relied on entirely synthetic datasets. For synthetic datasets, intermediate ground truth for all interim stages of the input text can easily be generated. However, for interactive image generation to be truly useful, it needs to generate realistic looking images not constrained to a synthetic dataset's subspace. The challenge with using real datasets like COCO is that we do not have "ground-truth" images corresponding to interim scene graphs (i.e., where all objects of the training image are not present). Only at the final step, with the full scene graph, do we get supervision from the training image. This makes our loss formulation described in the previous section particularly important. To enable comparison against BID7, we follow their dataset preprocessing steps. They ignore objects covering less than 2% of the image, and use images with 3 to 8 objects. They divide the COCO-Stuff 2017 validation set into their own validation and test sets, which contain 24,972 train, 1024 validation, and 2048 test images. For fair comparison, we do the same. captures the scene graph's semantic context, it lacks consistency over multiple passes and the generated images are of poor quality. AttnGAN on the other hand generates higher resolution images but doesn't capture semantic context from the scene graph. Our model addresses both of these shortcomings by incrementally adding objects in accordance with the scene graph and generating higher quality images. 
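Since the perceptual similarity term is what allows training without intermediate ground truth, a minimal sketch of the generator-side loss combination may help. The version below uses frozen VGG-16 feature distances as a stand-in for the perceptual metric cited above (the exact layers, weights, and the precise loss balancing are not specified in the text and are assumptions here).

import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    # Feature-space distance on a frozen VGG-16, standing in for the
    # perceptual similarity measure used to supervise intermediate steps.
    def __init__(self, layer=16):
        super().__init__()
        self.features = vgg16(pretrained=True).features[:layer].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, x, y):
        # x, y: 3-channel images in the range expected by VGG.
        return nn.functional.mse_loss(self.features(x), self.features(y))

perc, l1 = PerceptualLoss(), nn.L1Loss()

def generator_losses(step_images, final_gt, adv_terms):
    # step_images: list of generated images, one per incremental step;
    # only the final step has a pixel-level ground truth (final_gt).
    loss = l1(step_images[-1], final_gt)                                  # L1 on final image
    loss = loss + sum(perc(img, final_gt) for img in step_images)         # perceptual regularizer
    loss = loss + sum(l1(a, b) for a, b in zip(step_images, step_images[1:]))  # step consistency
    return loss + adv_terms                                               # plus adversarial/box/mask terms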
We compare the performance of our model against two baselines BID7 and BID14 as can be seen in FIG0. The scene graph in the first step contains two objects and the relationships between them. An additional object and its relationship with other objects is added to the scene graph at each of the next two steps. Note that the two baselines do not generate incrementally but for fair comparison, we generate outputs from them by feeding different amounts of information (scene graph or text) in three independent forward passes. As can be seen, the first baseline BID7 is able to capture the semantic context provided by the graph. However (i) it fails to preserve consistency over multiple passes and generates a completely new image for each scene graph, agnostic of what it had generated at the previous step and Figure 4: Sample outputs of our pipeline, visualized over 3 steps of generation. 3 splits for each image are created, based on the number of objects in that image. Synthetic scene graphs are then generated for each split and used for incremental image generation. While three steps are used for training, this method is quite easily extensible for more number of steps as well.(ii) the images generate are of poor quality. The second baseline BID14 produces visually pleasing and high resolution images but completely fails to capture any semantic context provided from the graph. Our model, on the other hand, is capable of incrementally adding new objects to the image created in the previous step in accordance with the relationships defined in the scene graph. Additionally, the quality of generated images is significantly improved, since at each step the model has to generate only a few objects rather than generating cluttered scenes with multiple objects, hence enabling it to better generate scene semantics. We present comprehensive generated by our model in Figure 4. It can be seen in Figures 4(c,f,h) that the model starts by generating objects as described by the scene graph and outputs noise in the when it does not have enough information provided as input. As the information is added in the scene graph, objects like grass and sky begin to materialize. However it can also be seen that sometimes due to inherent biases in the dataset, the model begins to hallucinate the even when no information is provided explicitly. In Figure 4 (a), even though there is no mention of rock or water in the initial graph, the model is hallucinating objects of similar shapes at similar locations. We use Inception Score BID12 for evaluating the quality of the images generated from our models. Inception Score uses an ImageNet based classifier to provide a quantitative evaluation of how realistic generated images appear. Inception Scores were originally proposed with the two main goal. Firstly, the images generated should contain clear objects (i.e. the images are sharp rather than blurry), (or, for image x and label y, p(y|x) should be low entropy). Secondly, the generative algorithm should output a high diversity of images from all the different classes in ImageNet, or p(y) should be high entropy. Ground Truth BID7 Step 1 (Ours) Step 2 (Ours) Step 3 (Ours) 6.13 3.05 3.68 5.02 4.14 Table 1: Inception Scores for Ground Truth Images, images generated from Sg2im and the three steps from our modelThe inception scores are reported in Table 1. We compare the inception score of the images generated from the baseline model sg2im with the full scene graph of the ground truth images. 
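The two criteria above are commonly combined into a single number as IS = exp(E_x[KL(p(y|x) || p(y))]); the paper does not spell out this formula, so the sketch below follows the standard definition, with the class posteriors assumed to come from a pretrained ImageNet classifier.

import numpy as np

def inception_score(probs, eps=1e-12):
    # probs: (N, C) array of p(y|x) for N generated images and C classes,
    # obtained from a pretrained ImageNet classifier.
    p_y = probs.mean(axis=0, keepdims=True)                              # marginal p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))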
For our sequential generation model, we report the scores over three steps of generation, where at the third step the scene graph is the full scene graph corresponding to the ground truth image. We observe that due to our modified loss formulation and incremental generation, our model performs better in inception scores in all three steps. Next, we evaluate how well our model retains the generated image from the previous step when adding in information to the image in the subsequent step (i.e., how "visually consistent" the generated images are). We use the perceptual similarity loss proposed by, and report the mean losses in Table 2. As seen both in the table and qualitatively in FIG0, our model is much more successful at preserving context and previously generated content over subsequent steps. Step 1 to Step 2 Step 2 to Step 3 Table 2: Mean Perceptual similarity loss, evaluated between the images generated images of sg2im and our proposed model. Lower is better. In this paper, we proposed an approach to sequentially generate images using incrementally growing scene graphs with context preservation. Through extensive evaluation and qualitative , we demonstrate that our approach is indeed able to generate an image sequence that is consistent over time and preserves the context in terms of objects generated in previous steps. In future, we plan to explore generating end-to-end with text description by augmenting our methodology with module to generate scene graphs from language input. While scene-graphs provide a very convenient modality to capture image semantics, we would like to explore ways to take natural sentences as inputs to modify the underlying scene graph. The current baseline method does single shot generation by passing the entire layout map through the Cascade Refinement Net for the final image generation. We plan to investigate whether the quality of generation can be improved by instead using attention on the GCN embeddings during generation. This could also potentially make the task of only modifying certain regions in the image easier. Further, we plan to explore better architectures for image generation through layouts for higher resolution image generation. | Interactively generating image from incrementally growing scene graphs in multiple steps using GANs while preserving the contents of image generated in previous steps | 679 | scitldr |
In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images. However, most Convolutional Neural Networks (CNNs) for image classification were developed using biased datasets that contain large objects, in mostly central image positions. To assess whether classical CNN architectures work well for tiny object classification we build a comprehensive testbed containing two datasets: one derived from MNIST digits and one from histopathology images. This testbed allows controlled experiments to stress-test CNN architectures with a broad spectrum of signal-to-noise ratios. Our observations indicate that: There exists a limit to signal-to-noise below which CNNs fail to generalize and that this limit is affected by dataset size - more data leading to better performances; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio in general, higher capacity models exhibit better generalization; when knowing the approximate object sizes, adapting receptive field is beneficial; and for very small signal-to-noise ratio the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance. Convolutional Neural Networks (CNNs) are the current state-of-the-art approach for image classification (; ; ;). The goal of image classification is to assign an image-level label to an image. Typically, it is assumed that an object (or concept) that correlates with the label is clearly visible and occupies a significant portion of the image; ). Yet, in a variety of real-life applications, such as medical image or hyperspectral image analysis, only a small portion of the input correlates with the label, ing in low signal-to-noise ratio. We define this input image signal-to-noise ratio as Object to Image (O2I) ratio. The O2I ratio range for three real-life datasets is depicted in Figure 1. As can be seen, there exists a distribution shift between standard classification benchmarks and domain specific datasets. For instance, in the ImageNet dataset objects fill at least 1% of the entire image, while in histopathology slices cancer cells can occupy as little as 10 −6 % of the whole image. Recent works have studied CNNs under different noise scenarios, either by performing random input-to-label experiments (; or by directly working with noisy annotations (; ;). While, it has been shown that large amounts of label-corruption noise hinders the CNNs generalization (;, it has been further demonstrated that CNNs can mitigate this label-corruption noise by increasing the size of training data , tuning the optimizer hyperparameters (Jastrzębski et al., 2017) or weighting input training samples . However, all these works focus on input-to-label corruption and do not consider the case of noiseless input-to-label assignments with low and very low O2I ratios. In this paper, we build a novel testbed allowing us to specifically study the performance of CNNs when applied to tiny object classification and to investigate the interplay between input signal-to-noise ratio and model generalization. We create two synthetic datasets inspired by the children's puzzle book Where's Wally? . 
The first dataset is derived from MNIST digits and allows us to produce a relatively large number of datapoints with explicit control of the O2I ratio. (Caption of Figure 1, referenced above: O2I ratio ranges for two medical imaging datasets, CAMELYON17 and MiniMIAS, as well as one standard computer vision classification dataset, ImageNet. The ratio is defined as O2I = A object / A image, where A object and A image denote the areas of the object and the image, respectively. Together with the O2I range, we display example images jointly with the object area A object marked in red.) The second dataset is extracted from histopathology imaging, where we crop images around lesions and obtain a small number of datapoints with an approximate control of the O2I ratio. To the best of our knowledge, these datasets are the first ones designed to explicitly stress-test the behaviour of CNNs in the low input-image signal-to-noise-ratio regime. We develop a classification framework, based on CNNs, and analyze the effects of different factors affecting the model optimization and generalization. Throughout an empirical evaluation, we make the following observations: -Models can be trained in the low O2I regime without using any pixel-level annotations and generalize if we leverage enough training data. However, the amount of training data required for the model to generalize scales rapidly with the inverse of the O2I ratio. When considering datasets with fixed size, we observe an O2I ratio limit below which all tested scenarios fail to exceed random performance. -We empirically observe that higher capacity models show better generalization. We hypothesize that high capacity models learn the input noise structure and, as a result, achieve satisfactory generalization. -We confirm the importance of model inductive bias -in particular, the model's receptive field size. Our results suggest that different pooling operations exhibit similar performance for larger O2I ratios; however, for very small O2I ratios, the type of pooling operation affects the optimization ease, with max-pooling leading to the fastest convergence. The code of our testbed will be publicly available at https://anonymous.url, allowing all data and results to be reproduced; we hope this work can serve as a valuable resource facilitating further research into the understudied problem of low signal-to-noise classification scenarios. In this subsection, we describe the data generation process. Inspired by the cluttered MNIST dataset, we introduce a scaled up, large resolution cluttered MNIST dataset, suitable for binary image classification. In this dataset, images are obtained by randomly placing a varying number of MNIST digits on a large resolution image canvas. We keep the original 28 × 28 pixel digit resolution and control the O2I ratio by increasing the resolution of the canvas 1. As a result, we obtain the following O2I ratios {19.1, 4.8, 1.2, 0.3, and 0.075}%, which correspond to the following canvas resolutions: 64 × 64, 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024 pixels, respectively. As object of interest, we select digit 3. A sketch of this generation procedure is given below; the exact composition of positive and negative images is described next.
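The sketch below assembles one needle-MNIST image following the numbers above (28 × 28 digits, canvas sizes matched to the O2I ratios, digit 3 as the object of interest, remaining classes as clutter). The digit source, sampling helpers and the lack of overlap handling are illustrative assumptions, not the released generation code.

import numpy as np
from torchvision.datasets import MNIST

mnist = MNIST(root="./data", train=True, download=True)
digits = np.stack([np.array(img) for img, _ in mnist])          # (N, 28, 28) uint8 digits
labels = np.array([y for _, y in mnist])

def make_sample(canvas=256, n_clutter=25, positive=True, rng=np.random):
    # Positive images contain exactly one digit "3"; clutter digits are drawn
    # with replacement from the remaining classes and pasted at random positions.
    img = np.zeros((canvas, canvas), dtype=np.uint8)

    def paste(d):
        y, x = rng.randint(0, canvas - 28, size=2)
        img[y:y + 28, x:x + 28] = np.maximum(img[y:y + 28, x:x + 28], d)

    for _ in range(n_clutter):
        paste(digits[rng.choice(np.where(labels != 3)[0])])
    if positive:
        paste(digits[rng.choice(np.where(labels == 3)[0])])
    return img, int(positive)

image, label = make_sample(canvas=1024, n_clutter=400, positive=True)   # O2I ~ 0.075%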
All positive images contain exactly one instance of the digit 3, randomly placed within the image canvas, while negative images do not contain any instance of it. We also include distractors (clutter digits): any MNIST digit image sampled with replacement from the set of labels {0, 1, 2, 4, 5, 6, 7, 8, 9}. We maintain approximately constant clutter density over different O2I ratios. Thus, the O2I ratios {19.1, 4.8, 1.2, 0.3, and 0.075}% correspond to 2, 5, 25, 100, and 400 clutter objects, respectively. For each value of the O2I ratio, we obtain 11276, 1972, and 4040 training, validation and test images, respectively 2. Fig. 2 depicts example images for different O2I ratios. We refer the interested reader to the supplementary material for details on the image generation process as well as additional dataset visualizations. Histopathology: needle CAMELYON (nCAMELYON). The CAMELYON dataset contains gigapixel histopathology images with pixel-level lesion annotations from 5 different acquisition sites. We use the pixel-wise annotations to extract crops with controlled O2I ratios. Namely, we generate datasets for O2I ratios in the ranges (100−50)%, (50−10)%, (10−1)%, and (1−0.1)%, and we crop images at resolutions of 128 × 128, 256 × 256, and 512 × 512 pixels. This results in training sets of about 20−235 unique lesions per dataset configuration (see the supplementary for a detailed list of dataset sizes). More precisely, positive examples are created by taking 50 random crops from every contiguous lesion annotation and rejecting a crop if its O2I ratio does not fall within the desired range. Negative images are taken by randomly cropping healthy images and filtering out image crops that mostly contain background. We ensure class balance by sampling an equal amount of positive and negative crops. Once the crops were extracted, no pixel-wise information is used during training. Figure 2 shows examples of extracted images used in the nCAMELYON dataset experiments. We refer to the supplementary for more detail about the data extraction process, the resulting dataset sizes and more visualizations. Our classification pipelines follow the BagNet backbone, which allows us to explicitly control the network receptive field size. Figure 3 shows a schematic of our approach. As can be seen, the pipelines are built of three components: a topological embedding extractor, in which we can control the embedding receptive field; a global pooling operation that converts the topological embedding into a global embedding; and a binary classifier that receives the global embedding and outputs binary classification probabilities. By varying the embedding extractor and the pooling operation, we test a set of 48 different architectures. Topological embedding extractor. The extractor takes as input an image I of size [w img × h img × c img] and outputs a topological embedding E t of shape [w enc × h enc × c enc], where w., h., and c. represent width, height and number of channels. Due to the relatively large image sizes, we train the pipeline with small batch sizes and, thus, we replace the BatchNorm operation used in BagNet with Instance Normalization. In the paper, we experiment with four different pooling operations, namely: max, logsumexp, average, and soft attention. In our experiments, we follow an existing soft attention formulation. The details about global pooling operations can be found in the supplementary material.
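A compact sketch of the pooling stage described above is given below; the encoder is an arbitrary small-receptive-field CNN standing in for the BagNet-style extractor, and the soft-attention head is one common formulation rather than the exact one used in the paper.

import torch
import torch.nn as nn

class GlobalPoolClassifier(nn.Module):
    # Topological embedding -> global pooling -> binary classifier.
    def __init__(self, encoder, channels, pooling="max"):
        super().__init__()
        self.encoder = encoder                      # CNN with a limited receptive field
        self.pooling = pooling
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # used only for soft attention
        self.classifier = nn.Linear(channels, 1)

    def pool(self, e):                              # e: (B, C, H, W) topological embedding
        if self.pooling == "max":
            return e.amax(dim=(2, 3))
        if self.pooling == "avg":
            return e.mean(dim=(2, 3))
        if self.pooling == "logsumexp":
            return e.flatten(2).logsumexp(dim=2)
        if self.pooling == "attention":
            a = torch.softmax(self.attn(e).flatten(2), dim=2)        # (B, 1, H*W)
            return (e.flatten(2) * a).sum(dim=2)
        raise ValueError(self.pooling)

    def forward(self, x):
        return self.classifier(self.pool(self.encoder(x)))           # binary logit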
In this section, we experimentally test how CNNs' optimization and generalization scale with low and very low O2I ratios. First, we provide details about our experimental setup and then we design experiments to provide empirical evidence to the following questions: Image-level annotations: Is it possible to train classification systems that generalize well in low and very low O2I scenarios? O2I limit vs. dataset size: Is there an O2I ratio limit below which the CNNs will experience generalization difficulties? Does this O2I limit depend on the dataset size? O2I limit vs. model capacity: Do higher capacity models generalize better? Inductive bias -receptive field: Is adjusting receptive field size to match (or exceed) the expected object size beneficial? Global pooling operations: Does the choice of global pooling operation affect model generalization? Finally, we inquire about the optimization ease of the models trained on data with very low O2I ratios. In all our experiments, we used RMSProp with a learning rate of η = 5 · 10 −5 and decayed the learning rate multiplying it by 0.1 at 80, 120 and 160 epochs 3. All models were trained with cross entropy loss for a maximum of 200 epochs. We used an effective batch size of 32. If the batch did not fit into memory we used smaller batches with gradient accumulation. To ensure robustness of our , we run every experiment with six different random seeds and report the mean and standard deviation. Throughout the training we monitored validation accuracy, and reported test set for the model that achieved best validation set performance. (a) mean validation set accuracy heatmap for max pooling operation, and (b) minimum required training set size to achieve the noted validation accuracy. We test training set sizes ∈ {1400, 2819, 5638, 7500, 11276, 22552} and report the minimum amount of training examples that achieve a specific validation performance pooling over different network capacities. In this subsection, we present and discuss the main of our analysis. Unless stated otherwise, the capacity of the ResNet-50 network is about 2.3 · 10 7 parameters. Additional and analysis are presented in the supplementary material. Image-level annotations: For this experiment, we vary the O2I ratio on nMNIST and nCAMELYON to test its influence on the generalization of the network. Figure 4 depicts the for the best configuration according to the validation performance: we use max-pooling and receptive field sizes of 33 × 33 and 9 × 9 pixels for the nMNIST and nCAMELYON datasets, respectively. For the nMNIST dataset, the plot represents the mean over 6 random seeds together with the standard deviation; while for the nCAMELYON dataset we report an average over both the 6 seeds and the crop sizes. We find that the tested CNNs achieve reasonable test set accuracies for the O2I ratios larger than 0.3% for the nMNIST datset and the O2I ratios above 1% for the histopathology dataset. For both datasets, smaller O2I ratios lead to poor or even random test set accuracies. We test the influence of the training set size on model generalization for the nMNIST data, to understand the CNNs' generalization problems for very small O2I ratios. We tested six different dataset sizes 4. Fig. 5a depicts the for max-pooling and a receptive field of 33 × 33 pixels. We observe that larger datasets yield better generalization and this increment is more pronounced for small O2I ratios. 
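A minimal sketch of the optimization setup described earlier in this section (RMSProp with learning rate 5e-5, decay by 0.1 at epochs 80/120/160, cross-entropy, effective batch size 32 via gradient accumulation). The model and data loader are assumed to be provided, a single-logit binary classifier is assumed, and the micro-batch split is an illustrative choice.

import torch

def train(model, train_loader, epochs=200, accum=4):
    """RMSProp at lr 5e-5, x0.1 decay at epochs 80/120/160; gradient accumulation so that
    e.g. 4 micro-batches of 8 give an effective batch size of 32."""
    optimizer = torch.optim.RMSprop(model.parameters(), lr=5e-5)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 120, 160], gamma=0.1)
    loss_fn = torch.nn.BCEWithLogitsLoss()           # binary cross-entropy on a single logit
    for _ in range(epochs):
        optimizer.zero_grad()
        for i, (x, y) in enumerate(train_loader):
            loss = loss_fn(model(x).squeeze(1), y.float()) / accum
            loss.backward()                          # gradients accumulate across micro-batches
            if (i + 1) % accum == 0:
                optimizer.step()
                optimizer.zero_grad()
        scheduler.step()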
For further insights, we plot a heatmap representing the mean validation set 5 for all considered 02Is and training set sizes (Fig. 6a) as well as the minimum number of training examples to achieve a validation accuracy We report only runs that fit the training data. Otherwise we report random accuracy and depict it with a texture on the bars. for (a) the nMNIST dataset and (b) the nCAME-LYON datset. We report only runs that fit the training data. Otherwise we report random accuracy and depict it with a texture on the bars.: nMNIST optimization: (a) number of training epochs needed to fit the 11k training data and (b) the number of successful runs. The textured bars indicate that the model did not fit the training data for all random seeds. of 70% and 85% (Fig. 6b). We observe that in order to achieve good classification generalization the required training set size rapidly increases with the decrease of the O2I ratio. O2I limit vs. capacity: In this experiment, we train networks with different capacities -by uniformly scaling the initial number of filters in convolutional kernels by [6 . We show the CNNs test set performances as a function of the O2I ratio and the network capacity in Figures 5b and 5c for the nMNIST (with 11k training points) and nCAMELYON data, respectively. On nMNIST, we observe a clear trend, where the model test set performance increases with capacity and this boost is larger for smaller O2Is. We hypothesize, that this generalization improvement is due to the model ability to learn-to-ignore the input data noise; with smaller O2I there is more noise to ignore and, thus, higher network capacity is required to solve the task. However, for the nCAMELYON dataset, this trend is not so pronounced and we attribute this to the limited dataset size (more precisely to the small number of unique lesions). These suggest that collecting a very large histopathology dataset might enable training of CNN models using only image level annotations. Inductive bias -receptive field: We report the test accuracy as a function of the O2I ratio and the receptive field size for nMINIST in Figure 7a and for nCAMELYON in Figure 7b. Both plots depict for the global max pooling operation. For nMNIST, we observe that a receptive field that is bigger than the area occupied by one single digit leads to best performances; for example, receptive fields of 33 × 33 and 177 × 177 pixels clearly outperform the smallest tested receptive field of 9 × 9 pixels. However, for the nCAMELYON dataset we observe that the smallest receptive field actually performs best. This suggests that most of the class-relevant information is contained in the texture and that higher receptive fields pick up more spurious correlations, because the capacity of the networks is constant. In this experiment, we compare the performance of four different pooling approaches. We present the relation between test accuracy and pooling function for different O2I ratios with a receptive field of 33 × 33 pixels for nMNIST in Figure 8a and 9 × 9 pixels for nCAMELYON in Figure 8b. On the one hand, for the nMNIST dataset, we observe that for the relatively large O2I ratios, all pooling operations reach similar performance; however, for smaller O2Is we see that max-pooling is the best choice. We hypothesize that the global max pooling operation is best suited to remove nMNIST-type of structured input noise. 
On the other hand, when using the histopathology dataset, for the smallest O2I mean and soft attention poolings reach best performances; however, these outcomes might be affected by the relatively small nCAMELYON dataset used for training. Optimization: In our large scale nMNIST experiments (when using ≈ 11k datapoints), we observed that some configurations have problems fitting the training data 7. In some runs, after significant efforts put into CNNs hyperparamenter selection, the training accuracy was close to random. To investigate this issue further, we followed the setup of randomized experiments from (; and we substituted the nMNIST datapoints with samples from an isotropic Gaussian distribution. On the one hand, we observed that all the tested setups of our pipeline were able to memorize the Gaussian samples, while, on the other hand, most setups were failing to memorize the same-size, nMNIST datataset for small and very small O2I ratios. We argue that the nMNIST structured noise and its compositionality may be a "harder" type of noise for the CNNs than Gaussian isotropic noise. To provide further experimental evidence, we depict average time-to-fit the training data (in epochs) in Fig. 9a as well as number of successful optimizations in Fig. 9b for different O2I ratios and pooling methods 8. We observe that the optimization gets progressively harder with decreasing O2I ratio (with max pooling being the most robust). Moreover, we note that the are consistent across different random seeds, where all runs either succeed or fail to converge. Reasoning about tiny objects is of high interest in many computer vision areas, such as medical imaging (; ; ; ;) and remote sensing . To overcome the low signal-to-noise ratio, most approaches rely on manual dataset "curation" and collect additional pixel-level annotations such as landmark positions , bounding boxes or segmentation maps . This additional annotation allows to transform the original needle-in-a-haystack problem into a less noisy but imbalanced classification problem (; ; Bándi et al., 2019). However, collecting pixel level annotations has a significant cost and might require expert knowledge, and as such, is a bottleneck in the data collection process. Other approaches leverage the fact that task-relevant information is often not uniformly distributed across input data, e.g. by using attention mechanisms to process very high-dimensional inputs (; ; ;). However, those approaches are mainly motivated from a computational perspective trying to reduce the computational footprint at inference time. Some recent research has also studied attention based approaches both in the context of multi-instance learning and histopathology image classification . However, neither of the works report the exact O2I ratio used in the experiments. In this subsection, we briefly highlight the dimensions of optimization and generalization of CNN that are handy in low O2I classification scenarios. Model capacity. For fixed training accuracy, over-parametrized CNNs tend to generalize better . In addition, when properly regularized and given a fixed size dataset, higher capacity models tend to provide better performance . However, finding proper regularization is not trivial . Dataset size. CNN performance improves logarithmically with dataset size . Moreover, in order to fully exploit the data benefit, the model capacity should scale jointly with the dataset size . Model inductive biases. 
Inductive biases limit the space of possible solutions that a neural network can learn . Incorporating these biases is an effective way to include data (or domain) specific knowledge in the model. Perhaps the most successful inductive bias is the use of convolutions in CNNs. Different CNN architectures (e. g. altering network connectivity) also lead to improved model performance . Additionally, it has been shown on the ImageNet dataset that CNN accuracy scales logarithmically with the size of the receptive field . Although low input image signal-to-noise scenarios have been extensively studied in signal processing field (e.g. in tasks such as image reconstruction), less attention has been devoted to low signal-tonoise classification scenarios. Thus, in this paper we identified an unexplored machine learning problem, namely image classification in low and very low signal-to-noise ratios. In order to study such scenarios, we built two datasets that allowed us to perform controlled experiments by manipulating the input image signal-to-noise ratio and highlighted that CNNs struggle to show good generalization for low and very low signal-to-noise ratios even for a relatively elementary MNIST-based dataset. Finally, we ran a series of controlled experiments 9 that explore both a variety of CNNs' architectural choices and the importance of training data scale for the low and very low signal-to-noise classification. One of our main observation was that properly designed CNNs can be trained in low O2I regime without using any pixel-level annotations and generalize if we leverage enough training data; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the O2I ratio. Thus, with our paper (and the code release) we invite the community to work on data-efficient solutions to low and very low signal-to-noise classification. Our experimental study exhibits limitations: First, due to the lack of large scale datasets that allow for explicit control of the input signal-to-noise ratios, we were forced to use the synthetically built nMNIST dataset for most of our analysis. As a real life dataset, we used crops from the histopathology CAMELYON dataset; however, due to relatively a small number of unique lesions we were unable to scale the histopathology experiments to the extent as the nMNIST experiments, and, as , some might be affected by the limited dataset size. Other large scale computer vision datasets like MS COCO exhibit correlations of the object of interest with the image . For MS COCO, the smallest O2I ratios are for the object category "sports ball" which on average occupies between 0.3% and 0.4% of an image and its presence tends to be correlated with the image (e. g. presence of sports fields and players). However, future research could examine a setup in which negative images contain objects of the categories "person" and "baseball bat" and positive images also contain "sports ball". Second, all the tested models improve the generalization with larger dataset sizes; however, scaling datasets such as CAMELYON to tens of thousands of samples might be prohibitively expensive. Instead, further research should be devoted to developing computationally-scalable, data-efficient inductive biases that can handle very low signal-to-noise ratios with limited dataset sizes. Future work, could explore the knowledge of the low O2I ratio and therefore sparse signal as an inductive bias. 
Finally, we studied low signal-to-noise scenarios only for binary classification scenarios 10; further investigation should be devoted to multiclass problems. We hope that this study will stimulate the research in image classification for low signal-to-noise input scenarios. In this section, we provide additional details about the datasets used in our experiments. The needle MNIST (nMNIST) dataset is designed as a binary classification problem: Is there a 3 in this image?'. To generate nMINST, we use the original training, validation and testing splits of the MNIST dataset and generate different nMINST subsets by varying the object-to-image (O2I) ratio, ing in O2I ratios of 19.1%, 4.8%, 1.2%, 0.3%, and 0.075%. We define positive images as the ones containing exactly one digit 3 and negative images as images without any instance of it. We keep the original MNIST digit size and place digits randomly onto a clear canvas to generate a sample of the nMNIST dataset. More precisely, we adapt the O2I ratio by changing the the canvas size, ing in nMNIST image resolution being in 64 × 64, 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024 pixels. To assign MNIST digits to canvas, we split the MNIST digits into two subsets: digit-3 versus clutter (any digit from a set of {0, 1, 2, 4, 5, 6, 7, 8, 9}). For the positive nMNIST images, we sample one digit 3 (without replacement) and n digits (with replacement) from the digit-3 and clutter subsets, respectively. For the negative nMNIST images, we sample n + 1 instances from the clutter subset. We adapt n to keep approximately constant object density for all canvas and choose n to be 2, 5, 25, 100, and 400 for canvas resolutions 64 × 64, 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024, respectively. As , for each value of O2I ratio, we obtain 11276, 1972, 4040 of training, validation and testing images, out of which 50% are negative and 50% are positive images. We present both positive and negative samples for different O2I ratios in Figure 10. The needle CAMELYON (nCAMELYON) is designed as a binary classification task: Are there breast cancer metastases in the image or not?. We rely on the pixel-level annotations within CAMELYON to extract samples for nCAMELYON. We use downsampling level 3 from the original whole slide image using the MultiResolution Image interface released with the original CAMELYON dataset. For positive examples, we identify contiguous regions within the annotations, and take 50 random crops around each contiguous region ensuring that the full contiguous region is inside the crop, and total number of lesion pixels inside the crop are in the desired O2I ratio. The negative crops are taken from healthy images randomly filtering for images that are mostly using a heuristic that the average green pixel value in the crop is below 200. Since the CAMELYON dataset contains images acquired by 5 different centers, we split training, validation and test sets center-wise to avoid any contamination of data across the three sets. All crops coming from center 3 are part of the validation set, and all crops coming from center 4 are part of the test set. All images are generated for resolutions 128 × 128, 256 × 256, 512 × 512, and 1024 × 1024 and are split into 4 different O2I ratios: (100 − 50)%, (50 − 10)%, (10 − 1)%, and (1 − 0.1)%. Figure 11 shows examples of images from nCAMELYON dataset, Table 1 presents number of unique lesions in each dataset, and Table 2 depicts number of dataset images stratified for image resolution and O2I ratios. 
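The nMNIST generation procedure described above can be sketched as follows; digit placement, overlap handling (here a pixel-wise maximum) and sampling details are simplifying assumptions rather than the exact generation code.

import numpy as np

CLUTTER_PER_CANVAS = {64: 2, 128: 5, 256: 25, 512: 100, 1024: 400}

def make_nmnist_image(canvas_side, positive, threes, clutter, rng):
    """Place 28x28 MNIST digits at random positions on an empty canvas.
    `threes` and `clutter` are arrays of 28x28 digit images (digit 3 vs. all other digits)."""
    canvas = np.zeros((canvas_side, canvas_side), dtype=np.float32)
    n = CLUTTER_PER_CANVAS[canvas_side]
    digits = [threes[rng.integers(len(threes))]] if positive else []
    digits += [clutter[rng.integers(len(clutter))] for _ in range(n if positive else n + 1)]
    for d in digits:
        i = rng.integers(0, canvas_side - 28)
        j = rng.integers(0, canvas_side - 28)
        # overlaps resolved with a pixel-wise maximum (a simplifying assumption)
        canvas[i:i + 28, j:j + 28] = np.maximum(canvas[i:i + 28, j:j + 28], d)
    return canvas

# usage (digit-image arrays assumed): make_nmnist_image(128, True, threes, clutter, np.random.default_rng(0))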
Because center 3 does not contain lesions of suitable size for crops of with resolution 128 × 128 and O2I ratio (50 − 100)%, we do not include those training runs in our analysis. In this section, we provide additional details about the pipeline used in the experiments. More precisely, we formally define global pooling operations and provide detailed description of the different architectures. In our experiments, we are testing four different global pooling functions: max-pooling, meanpooling, logsumexp and soft attention. The max pooling operation simply returns the maximum value per each channel in the topological embedding. This operation can be formally defined as: Note, that we use subscript notation to denote dimensions of the embedding. The max pooling operation has a spacing effect on gradient backpropagation, during the backward pass through the model all information will be propagated through the embedding position that corresponds to the maximal value. In order to improve gradient backpropagation, one could apply logsumexp pooling, a soft approximation to max pooling. This pooling operation is defined as: Alternatively, one could use an average pooling operation that computes mean value for each channel in the topological embedding. This pooling operation can be formally defined as follows: Finally, attention based pooling include additional weighting tensor a of dimension (w enc × h enc × c enc) that rescales each topological embedding before averaging them. This operation can be formally defined as: s.t. In our experiments, following , we parametrize the soft-attention mechanisms as a [w,h] = sof tmax(f (E spat)) [w,h], where f (·) is modelled by two fully connected layers with tanh-activation and 128 hidden units. We adapt the BagNet architecture proposed in . An overview of the architectures for the tested three receptive field sizes is shown in Table 3. We depict the layers of Figure 12: Impact of the training set balance on model accuracy for different pooling operations and receptive field sizes. residual blocks in brackets and perform downsampling using convolutions with stride 2 within the first residual block. Note that the architectures for different receptive fields differ in the number of 3 × 3 convolutions. The rightmost column shows a regular ResNet-50 model. The receptive field is decreased by replacing 3 × 3 convolutions with 1 × 1 convolutions. We increase the number of convolution filters by a factor of 2.5 if the receptive field is reduced to account for the loss of the trainable parameters. Moreover, when testing different network capacities we evenly scale the number of convolutional filters by multiplying with a constant factor of s ∈ {1/4, 1/2, 1, 2}. In this section, we provide additional experimental as well as additional visualizations of the experiments presented in the main body of the paper. In many medical imaging datasets, it is common to be faced with class-imbalanced datasets. Therefore, in this experiment, we use our nMNIST dataset and test CNNs generalization under moderate and severe class imbalanced scenario. We alter the training set class balance by altering the proportion of positive images in the training dataset and use the following balance values 0.01, 0.1, 0.25, 0.5, 0.75, 0.9 and 0.99, where a value of 0.01 means almost no positive examples and 0.99 indicates very low number of negative images available at training time. Moreover, we ensure that the dataset size is constant (≈ 11k) and only the class-balance is modified. 
We run the experiments using the O2I ratio of 1.2%, three receptive field sizes (9 × 9, 33 × 33 and 177 × 177 pixels) and four pooling operations (mean, max, logsumexp and soft attention). For each balance value, we train 6 models using 6 random seeds and we oversample the underrepresented class. The are depicted in Figure 12. We observe that the model performance drops as the the training data becomes more unbalanced and that max pooling and logsumexp seem to be the most robust to the class imbalance. We also tested the effect of model capacity increase while having access only to a small dataset (3k class-balanced images) and contrast it with a larger dataset of ≈ 11k training images. We run this experiment on the nMNIST dataset using a network with 2.3 · 10 7 parameters using global max pooling operation and there different receptive field sizes: 9 × 9, 33 × 33 and 177 × 177 pixels. The are depicted in Figure 13. It can be seen that the model's capacity increase does not lead to better generalization, for small size datasets of ≈ 3k. In this section, we report additional for all tested global pooling operations on O2I limit vs. dataset size. We plot a heatmaps representing the validation set for all considered 02I and training set sizes (Figure 14) as well as the minimum number of training examples required to achieve a validation accuracy of 70% and 85% (Figure 15) Figure 13: Impact of the network capacity on the generalization performance dependent on the training set size for nMNIST at O2I ratio = 1.2%. The improvement based on the increased network capacity shrinks with smaller training set. Figure 14: Testing the O2I limit. Validation set accuracy heatmap for max, logsumexp, mean and soft attention poolings. We test training set sizes ∈ {1400, 2819, 5638, 7500, 11276, 22552} and report the average validation accuracy. We test the object localization capabilities of the trained classification models by examining their saliency maps. Figure 16 shows examples of the nMNIST dataset with the object bounding box in Figure 15: Testing the O2I limit. Minimum required training set size to achieve the noted validation accuracy. We test training set sizes ∈ {1400, 2819, 5638, 7500, 11276, 22552} and report the minimum amount of training examples that achieve a specific validation performance pooling over different network capacities. Figure 16: Example images from the nMNIST validation set and their corresponding saliency maps in red. We generate the saliency maps by calculating the absolute of the gradients with respect to the input image using max-pooling, a receptive field of 33, and ResNet-50 capacity. From top to bottom, we show random examples for O2I ratios of {19.14, 4.79, 1.20, 0.30}%. We annotate the object of interest with a blue outline. The captions show the true label y and the predictionŷ. Figure 17: Average precision for detecting the object of interest using the saliency maps for nMNIST. We adapt and use the localize an object by the maximum magnitude of the saliency. We use the magnitude of the saliency as the confidence of the detection. We count wrongly localised objects both as false positive and false negative. For images without object of interest, the we increase the false positive count only. We plot for max-pooling, a receptive field of 33, a training set with 11276 examples and ResNet-50 capacity. (a) shows the dependence of the AP on the pooling method using RF = 33 × 33, (b) shows the dependence on the receptive field using max-pooling. 
blue and the magnitude of the saliency in red. We rescale the saliency to for better contrast. However, this prevents the comparison of absolute saliency values across different images. In samples containing an object of interest, the models correctly assign high saliency to the regions surrounding the relevant object. On negative examples, the network assigns homogenous importance to all objects. We localise an object of interest as the location with maximum saliency. We follow to quantitatively examine the object detection performance using the saliency maps of the models. We plot the corresponding average precision in Figure 17. We find that the detection performance deteriorates for smaller O2I ratios regardless of the method. This is aligned with the classification accuracy. For small O2I ratios, max-pooling achieves the best detection scores. On larger O2I ratios, logsumexp achieves the best scores. | We study low- and very-low-signal-to-noise classification scenarios, where objects that correlate with class label occupy tiny proportion of the entire image (e.g. medical or hyperspectral imaging). | 680 | scitldr |
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available. Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer . Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 , BERT and Transformer-XL, seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism-originally introduced in Neural Machine Translation to better handle long-range dependencies . With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest. Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention or non-local relationships across the image . More recently, augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. noticed that, even though state-of-the art are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy. These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transfomers have the capacity to simulate any function-including a CNN. Indeed, Pérez et al. showed that a multilayer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open. We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings. Let X ∈ R T ×Din be an input matrix consisting of T tokens in of D in dimensions each. 
While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of T discrete objects, e.g. pixels. A self-attention layer maps any query token t ∈ [T] from D in to D out dimensions as follows: Self-Attention(X) t,::= softmax (A t,:) XW val, where we refer to the elements of the T × T matrix A:= XW qry W key X as attention scores and the softmax output 3 as attention probabilities. The layer is parametrized by a query matrix W qry ∈ R Din×D k, a key matrix W key ∈ R Din×D k and a value matrix W val ∈ R Din×Dout.For simplicity, we exclude any residual connections, batch normalization and constant factors. A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the T input tokens are shuffled. This is problematic for cases we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image), and added to the representation of the token itself before applying self-attention where P ∈ R T ×Din contains the embedding vectors for each position. More generally, P may be substituted by any function that returns a vector representation of the position. It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the output of the N h heads of output dimension D h are concatenated and projected to dimension D out as follows: and two new parameters are introduced: the projection matrix W out ∈ R N h D h ×Dout and a bias term b out ∈ R Dout. Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor X ∈ R W ×H×Din of width W, height H and D in channels, the output of a convolutional layer for pixel (i, j) is given by where W is the K × K × D in × D out weight tensor 4, b ∈ R Dout is the bias vector and the set contains all possible shifts appearing when convolving the image with a K × K kernel. In the following, we review how self-attention can be adapted from 1D sequences to images. With images, rather than tokens, we have query and key pixels q, k. Accordingly, the input is a tensor X of dimension W × H × D in and each attention score associates a query and a key pixel. To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if p = (i, j), we write X p,: and A p,: to mean X i,j,: and A i,j,:,:, respectively. With this notation in place, the multi-head self attention layer output at pixel q can be expressed as follows: and accordingly for the multi-head case. There are two types of positional encoding that has been used in transformer-based architectures: the absolute and relative encoding (see also Table 3 in the Appendix). With absolute encodings, a (fixed or learned) vector P p,: is assigned to each pixel p. The computation of the attention scores we saw in eq. can then be decomposed as follows: A abs q,k = (X q,: + P q,:)W qry W key (X k,: + P k,:) = X q,: W qry W key X k,: + X q,: W qry W key P k,: + P q,: W qry W key X k,: + P q,: W qry W key P k,: where q and k correspond to the query and key pixels, respectively. The relative positional encoding was introduced by. 
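Before moving to relative encodings, the self-attention layer and its multi-head extension defined above, with a learned absolute positional encoding, can be sketched as follows. This is a minimal, unbatched PyTorch sketch; the scaling factor, residual connections and normalization are omitted, as stated in the text, and all names are illustrative.

import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, T, d_in, d_k, d_h, n_heads, d_out):
        super().__init__()
        self.P = nn.Parameter(torch.randn(T, d_in) * 0.02)            # learned absolute positional encoding
        self.W_qry = nn.Parameter(torch.randn(n_heads, d_in, d_k) / d_in ** 0.5)
        self.W_key = nn.Parameter(torch.randn(n_heads, d_in, d_k) / d_in ** 0.5)
        self.W_val = nn.Parameter(torch.randn(n_heads, d_in, d_h) / d_in ** 0.5)
        self.out = nn.Linear(n_heads * d_h, d_out)                     # W_out and b_out

    def forward(self, X):                                              # X: [T, d_in]
        Xp = X + self.P                                                # add positional encodings
        heads = []
        for h in range(self.W_qry.shape[0]):
            scores = (Xp @ self.W_qry[h]) @ (Xp @ self.W_key[h]).T     # [T, T] attention scores
            probs = torch.softmax(scores, dim=-1)                      # attention probabilities
            heads.append(probs @ (Xp @ self.W_val[h]))                 # [T, d_h] per-head output
        return self.out(torch.cat(heads, dim=-1))                      # [T, d_out]

layer = MultiHeadSelfAttention(T=16, d_in=32, d_k=8, d_h=8, n_heads=4, d_out=32)
print(layer(torch.randn(16, 32)).shape)                                # torch.Size([16, 32])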
The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel: In this manner, the attention scores only depend on the shift δ:= k − q. Above, the learnable vectors u and v are unique for each head, whereas for every shift δ the relative positional encoding r δ ∈ R Dp is shared by all layers and heads. Moreover, now the key weights are split into two types: W key pertain to the input and W key to the relative position of pixels. This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main is the following: Theorem 1. A multi-head self-attention layer with N h heads of dimension D h, output dimension D out and a relative positional encoding of dimension D p ≥ 3 can express any convolutional layer of kernel size The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set ∆ ∆ K = {− K/2, . . ., K/2} 2 of all pixel shifts in a K × K kernel. The exact condition can be found in the statement of Lemma 1. Then, Lemma 2 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding: The learned parameters 2 ) and α (h) determine the center and width of attention of each head, respectively. On the other hand, δ = (δ 1, δ 2) is fixed and expresses the relative shift between query and key pixels. It is important to stress that the above encoding is not the only one for which the conditions of Lemma 1 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only D p = 3 dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one). The theorem covers the general convolution operator as defined in eq.. However, machine learning practitioners using differential programming frameworks might question if the theorem holds for all hyper-parameters of 2D convolutional layers: • Padding: a multi-head self-attention layer uses by default the "SAME" padding while a convolutional layer would decrease the image size by K − 1 pixels. The correct way to alleviate these boundary effects is to pad the input image with K/2 zeros on each side. In this case, the cropped output of a MHSA and a convolutional layer are the same. • Stride: a strided convolution can be seen as a convolution followed by a fixed pooling operation-with computational optimizations. Theorem 1 is defined for stride 1, but a fixed pooling layer could be appended to the Self-Attention layer to simulate any stride. • Dilation: a multi-head self-attention layer can express any dilated convolution as each head can attend a value at any pixel shift and form a (dilated) grid pattern. Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text , as well as audio (van den) and time series . 
Theorem 1 can be straightforwardly extended to show that multi-head self-attention with N h heads can also simulate a 1D convolutional layer with a kernel of size K = N h with min(D h, D out) output channels using a positional encoding of dimension D p ≥ 2. Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence-only that it has the capacity to do so. The proof follows directly from Lemmas 1 and 2 stated below: be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds: Then, for any convolutional layer with a K × K kernel and D out output channels, there exists {W Attention maps for pixel val . We show attention maps computed for a query pixel at position q. Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation and equation such that the effect of the multiple heads becomes more transparent: Note that each head's value matrix W (h) val ∈ R Din×D h and each block of the projection matrix W out of dimension D h × D out are learned. Assuming that D h ≥ D out, we can replace each pair of matrices by a learned matrix W (h) for each head. We consider one output pixel of the multi-head self-attention: Due to the conditions of the Lemma, for the h-th attention head the attention probability is one when k = q − f (h) and zero otherwise. The layer's output at pixel q is thus equal to For K = √ N h, the above can be seen to be equivalent to a convolutional layer expressed in eq. 17: there is a one to one mapping (implied by map f) between the matrices 2. Remark about D h and D out. It is frequent in transformer-based architectures to set In that case, W (h) can be seen to be of rank D out − D h, which does not suffice to express every convolutional layer with D out channels. Nevertheless, it can be seen that any D h out of D out outputs of MHSA(X) can express the output of any convolutional layer with D h output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be min(D h, D out). In practice, we advise to concatenate heads of dimension D h = D out instead of splitting the D out dimensions among heads to have exact re-parametrization and no "unused" channels. Lemma 2. There exists a relative encoding scheme {r δ ∈ R Dp} δ∈Z 2 with D p ≥ 3 and parame- Proof. We show by construction the existence of a D p = 3 dimensional relative encoding scheme yielding the required attention probabilities. As the attention probabilities are independent of the input tensor X, we set W key = W qry = 0 which leaves only the last term of eq.. Setting W key ∈ R D k ×Dp to the identity matrix (with appropriate row padding), yields A q,k = v r δ where δ:= k − q. Above, we have assumed that D p ≤ D k such that no information from r δ is lost. Now, suppose that we could write: for some constant c. In the above expression, the maximum attention score over A q,: is −αc and it is reached for A q,k with δ = ∆. On the other hand, the α coefficient can be used to scale arbitrarily the difference between A q,∆ and the other attention scores. In this way, for δ = ∆, we have and for δ = ∆, the equation becomes lim α→∞ softmax(A q,:) k = 0, exactly as needed to satisfy the lemma statement. What remains is to prove that there exist v and {r δ} δ∈Z 2 for which eq. holds. 
Expanding the RHS of the equation, we have which matches eq. with c = − ∆ 2 and the proof is concluded. Remark on the magnitude of α. The exact representation of one pixel requires α (or the matrices W qry and W key) to be arbitrary large, despite the fact that the attention probabilities of all other pixels converge exponentially to 0 as α grows. Nevertheless, practical implementations always rely on finite precision arithmetic for which a constant α suffices to satisfy our construction. For instance, since the smallest positive float32 scalar is approximately 10 −45, setting α = 46 would suffice to obtain hard attention. The aim of this section is to validate the applicability of our theoretical -which state that self-attention can perform convolution-and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers when trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that, for both cases, the attention probabilities learned tend to respect the conditions of Lemma 1, supporting our hypothesis. We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by that combining attention features with convolutional features improves performance on Cifar-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier, we compare it to the standard ResNet18 on the CIFAR-10 dataset (Krizhevsky et al.). In all experiments, we use a 2 × 2 invertible down-sampling on the input to reduce the size of the image. As the size of the attention coefficient tensors (stored during forward) scales quadratically with the size of the input image, full attention cannot be applied to bigger images. The fixed size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier. We used the PyTorch library and based our implementation on PyTorch Transformers 5. We release our code on Github 6 and hyper-parameters are listed in Table 2 (Appendix). Remark on accuracy. To verify that our self-attention models perform reasonably well, we display in Figure 6 the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training for our self-attention models against a small ResNet (Table 1). The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and reduce significantly the number of FLOPS. We observed that learned embeddings with content-based attention were harder to train probably due to their increased number of parameters. We believe that the performance gap can be bridged to match the ResNet performance, but this is not the focus of this work. As a first step, we aim to verify that, with the relative position encoding introduced in equation, attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the 3 × 3 kernels used predominantly by the ResNet architecture. The center of attention of each head h is initialized to ∆ (h) ∼ N (0, 2I 2). 
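Before looking at the trained attention positions, the pattern induced by the quadratic encoding of Lemma 2 can be checked numerically: with r_delta = (||delta||^2, delta_1, delta_2) and v = -alpha * (1, -2*Delta_1, -2*Delta_2), the softmax over shifts concentrates on delta = Delta as alpha grows. A small sketch follows; the shift window and the values of Delta and alpha are illustrative.

import numpy as np

def quadratic_attention(center, alpha, half_window=3):
    """Attention probabilities over relative shifts delta in a (2*half_window+1)^2 window."""
    d1, d2 = np.meshgrid(np.arange(-half_window, half_window + 1),
                         np.arange(-half_window, half_window + 1), indexing="ij")
    deltas = np.stack([d1, d2], axis=-1).reshape(-1, 2).astype(float)
    r = np.concatenate([(deltas ** 2).sum(1, keepdims=True), deltas], axis=1)   # r_delta
    v = -alpha * np.array([1.0, -2.0 * center[0], -2.0 * center[1]])
    scores = r @ v                                  # equals -alpha * (||delta - Delta||^2 - ||Delta||^2)
    probs = np.exp(scores - scores.max())
    return deltas, probs / probs.sum()

deltas, probs = quadratic_attention(center=(1, -2), alpha=10.0)
print(deltas[probs.argmax()], round(float(probs.max()), 4))   # [ 1. -2.] with probability close to 1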
Figure 3 shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend on specific pixel of the image forming a grid around the query pixel. Our intuition that Self-Attention applied to images learns convolutional filters around the queried pixel is confirmed. Figure 4 displays all attention head at each layer of the model at the end of the training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads (N h = 16). Figure 14 displays both local patterns similar to CNN and long range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space. Figure 4: Centers of attention of each attention head (different colors) for the 6 self-attention layers using quadratic positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the 50% and 90% percentiles of each Gaussian, respectively. We move on to study the positional encoding used in practice by fully-attentional models on images. We implemented the 2D relative positional encoding scheme used by (; : we learn a D p /2 position encoding vector for each row and each column pixel shift. Hence, the relative positional encoding of a key pixel at position k with a query pixel at position q is the concatenation of the row shift embedding δ 1 and the column shift embedding δ 2 (where δ = k − q). We chose D p = D out = 400 in the experiment. We differ from their (unpublished) implementation in the following points: (i) we do not use convolution stem and ResNet bottlenecks for downsampling, but only a 2 × 2 invertible downsampling layer At first, we discard the input data and compute the attention scores solely as the last term of eq.. The attention probabilities of each head at each layer are displayed on Figure 5. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the positional encoding scheme from randomly initialized vectors, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma 1 and thus Theorem 1. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies. We move on to a more realistic setting where the attention scores are computed using both positional and content-based attention (i.e., q k + q r in ) which corresponds to a full-blown standalone self-attention model. The attention probabilities of each head at each layer are displayed in Figure 6. We average the attention probabilities over a batch of 100 test images to outline the focus of each head and remove the dependency on the input image. Our hypothesis is confirmed for some heads of layer 2 and 3: even when left to learn the encoding from the data, certain self-attention heads only exploit positionbased attention to attend to distinct pixels at a fixed shift from the query pixel reproducing the receptive field of a convolutional kernel. 
Other heads use more content-based attention (see Figures 8 to 10 in Appendix for non-averaged probabilities) leveraging the advantage of Self-Attention over CNN which does not contradict our theory. In practice, it was shown by that combining CNN and self-attention features outperforms each taken separately. Our experiments shows that such combination is learned when optimizing an unconstrained fully-attentional model. The similarity between convolution and multi-head self-attention is striking when the query pixel is slid over the image: the localized attention patterns visible in Figure 6 follow the query pixel. This characteristic behavior materializes when comparing Figure 6 with the attention probabilities at a different query pixel (see Figure 7 in Appendix). Attention patterns in layers 2 and 3 are not only localized but stand at a constant shift from the query pixel, similarly to convolving the receptive field of a convolutional kernel over an image. This phenomenon is made evident on our interactive website 7. This tool is designed to explore different components of attention for diverse images with or without content-based attention. We believe that it is a useful instrument to further understand how MHSA learns to process images. Figure 6: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. Attention maps are averaged over 100 test images to display head behavior and remove the dependence on the input content. The black square is the query pixel. More examples are presented in Appendix A. In this section, we review the known differences and similarities between CNNs and transformers. The use of CNN networks for text-at word level or character level -is more seldom than transformers (or RNN). Transformers and convolutional models have been extensively compared empirically on tasks of Natural Language Processing and Neural Machine Translation. It was observed that transformers have a competitive advantage over convolutional model applied to text . It is only recently that; used transformers on images and showed that they achieve similar accuracy as ResNets. However, their comparison only covers performance and number of parameters and FLOPS but not expressive power. Beyond performance and computational-cost comparisons of transformers and CNN, the study of expressiveness of these architectures has focused on their ability to capture long-term dependencies. Another interesting line of research has demonstrated that transformers are Turingcomplete (; Pérez et al., 2019), which is an important theoretical but is not informative for practitioners. To the best of our knowledge, we are the first to show that the class of functions expressed by a layer of self-attention encloses all convolutional filters. The closest work in bridging the gap between attention and convolution is due to. They cast attention and convolution into a unified framework leveraging tensor outerproduct. In this framework, the receptive field of a convolution is represented by a "basis" tensor A ∈ R K×K×H×W ×H×W. For instance, the receptive field of a classical K × K convolutional kernel would be encoded by A ∆,q,k = 1{k − q = ∆} for ∆ ∈ ∆ ∆ K. The author distinguishes this index-based convolution with content-based convolution where A is computed from the value of the input, e.g., using a key/query dot-product attention. 
Our work moves further and presents sufficient conditions for relative positional encoding injected into the input content (as done in practice) to allow content-based convolution to express any index-based convolution. We further show experimentally that such behavior is learned in practice. We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters-similar to deformable convolutions . Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series. Jean-Baptiste Cordonnier is thankful to the Swiss Data Science Center (SDSC) for funding this work. Andreas Loukas was supported by the Swiss National Science Foundation (project "Deep Learning for Graph Structured Data", grant number PZ00P2 179981). We present more examples of attention probabilities computed by self-attention model. Figure 7 shows average attention at a different query pixel than Figure 6. Figures 8 to 10 display attention for single images. Figure 7: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content attention. We present the average of 100 test images. The black square is the query pixel. Figure 8: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the frog head. Figure 9: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the horse head. Figure 10: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the building in the . Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation and equation such that the effect of the multiple heads becomes more transparent: Note that each head's value matrix W (h) val ∈ R Din×D h and each block of the projection matrix W out of dimension D h × D out are learned. Assuming that D h ≥ D out, we can replace each pair of matrices by a learned matrix W (h) for each head. We consider one output pixel of the multi-head self-attention and drop the bias term for simplicity: with a q,: ) k. We rewrite the output of a convolution at pixel q in the same manner: Equality between equations and has a restricted support: only the columns associated with a pixel shift ∆ ∈ ∆ ∆ K in the receptive field of pixel q can be non-zero. This leads to the factorization Figure 11 where W conv ∈ R Necessary. 
Assume there exists x ∈ R HW such that x ∈ row(E q) and x ∈ row(A q) and set x to be a row of V We noticed the similarity of the attention probabilities in the quadratic positional encoding (Section 3) to isotropic bivariate Gaussian distributions with bounded support: Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distribution over pixel positions. Each head is parametrized by a center of attention ∆ and a covariance matrix Σ to obtain the following attention scores, where, once more, δ = k − q. The last term can be discarded because the softmax is shift invariant and we rewrite the attention coefficient as a dot product between the head target vector v and the relative position encoding r δ (consisting of the first and second order combinations of the shift in pixels δ): Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see if, using the above encoding the self-attention model would learn to attend to non-isotropic groups of pixels-thus forming unseen patterns in CNNs. Each head was parametrized by ∆ ∈ R 2 and Σ −1/2 ∈ R 2×2 to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to ∆ (h) ∼ N (0, 2I 2) and Σ −1/2 = I 2 + N (0, 0.01I 2) so that initial attention probabilities were close to an isotropic Gaussian. Figure 12 shows that the network did learn non-isotropic attention probability patterns, especially in high layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice-the quadratic positional encoding suffices. Figure 12: Centers of attention of each attention head (different colors) for the 6 self-attention layers using non-isotropic Gaussian parametrization. The central black square is the query pixel, whereas solid and dotted circles represent the 50% and 90% percentiles of each Gaussian, respectively. Pruning degenerated heads. Some non-isotropic attention heads attend on "non-intuitive" patches of pixels: either attending a very thin stripe of pixels, when Σ −1 was almost singular, or attending all pixels uniformly, when Σ −1 was close to 0 (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model or are these heads degenerated and unused? To find out, we pruned all heads having largest eigen-values smaller than 10 −5 or condition number (ratio of the biggest and smallest eigen-values) greater than 10 5. Specifically in our model with 6-layer and 9-heads each, we pruned heads from the first to the last layer. This means that these layers cannot express a 3 × 3 kernel anymore. As shown in yellow on fig. 2, this ablation initially hurts a bit the performance, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. Hence, without sacrificing performance, we reduce the size of the parameters and the number of FLOPS by a fourth. For completeness, we also tested increasing the number of heads of our architecture from 9 to 16. Similar to Figure 4, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. 
Heads with less-localized attention become more common at higher layers. | A self-attention layer can perform convolution and often learns to do so in practice. | 681 | scitldr |
We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an $n \times d$ matrix $A$, and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $||A- A'||_F$. The algorithm uses a training set of input matrices in order to optimize its performance. Specifically, some of the most efficient approximate algorithms for computing low-rank approximations proceed by computing a projection $SA$, where $S$ is a sparse random $m \times n$ “sketching matrix”, and then performing the singular value decomposition of $SA$. We show how to replace the random matrix $S$ with a “learned” matrix of the same sparsity to reduce the error. Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude. We also study mixed matrices where only some of the rows are trained and the remaining ones are random, and show that such matrices still offer improved performance while retaining worst-case guarantees. The success of modern machine learning has made it applicable to problems that lie outside of the scope of "classic AI". In particular, there has been a growing interest in using machine learning to improve the performance of "standard" algorithms, by fine-tuning their behavior to adapt to the properties of the input distribution. This "learning-based" approach to algorithm design has attracted considerable attention over the last few years, due to its potential to significantly improve the efficiency of some of the most widely used algorithmic tasks. Many applications involve processing streams of data (video, data logs, customer activity etc.) by executing the same algorithm on an hourly, daily or weekly basis. These data sets are typically not "random" or "worst-case"; instead, they come from some distribution which does not change rapidly from execution to execution. This makes it possible to design better algorithms tailored to the specific data distribution, trained on past instances of the problem. The method has been particularly successful in the context of compressed sensing. In the latter framework, the goal is to recover an approximation to an n-dimensional vector x, given its "linear measurement" of the form Sx, where S is an m × n matrix. Theoretical results show that, if the matrix S is selected at random, it is possible to recover the k largest coefficients of x with high probability using a matrix S with m = O(k log n) rows. This guarantee is general and applies to arbitrary vectors x. However, if vectors x are selected from some natural distribution (e.g., they represent images), recent works show that one can use samples from that distribution to compute matrices S that improve over a completely random matrix in terms of the recovery error. Compressed sensing is an example of a broader class of problems which can be solved using random projections. Another well-studied problem of this type is low-rank decomposition: given an n × d matrix A, and a parameter k, compute a rank-k matrix A' that minimizes the approximation loss ||A − A'||_F. Low-rank approximation is one of the most widely used tools in massive data analysis, machine learning and statistics, and has been a subject of many algorithmic studies. In particular, multiple algorithms developed over the last decade use the "sketching" approach.
Its idea is to use efficiently computable random projections (a.k.a., "sketches") to reduce the problem size before performing low-rank decomposition, which makes the computation more space and time efficient. For example, show that if S is a random matrix of size m × n chosen from an appropriate distribution, for m depending on, then one can recover a rank-k matrix A such that by performing an SVD on SA ∈ R m×d followed by some post-processing. Typically the sketch length m is small, so the matrix SA can be stored using little space (in the context of streaming algorithms) or efficiently communicated (in the context of distributed algorithms). Furthermore, the SVD of SA can be computed efficiently, especially after another round of sketching, reducing the overall computation time. See the survey for an overview of these developments. In light of the aforementioned work on learning-based compressive sensing, it is natural to ask whether similar improvements in performance could be obtained for other sketch-based algorithms, notably for low-rank decompositions. In particular, reducing the sketch length m while preserving its accuracy would make sketch-based algorithms more efficient. Alternatively, one could make sketches more accurate for the same values of m. This is the problem we address in this paper. Our Results. Our main finding is that learned sketch matrices can indeed yield (much) more accurate low-rank decompositions than purely random matrices. We focus our study on a streaming algorithm for low-rank decomposition due to, described in more detail in Section 2. Specifically, suppose we have a training set of matrices Tr = {A 1, . . ., A N} sampled from some distribution D. Based on this training set, we compute a matrix S * that (locally) minimizes the empirical loss where SCW(S *, A i) denotes the output of the aforementioned Sarlos-Clarkson-Woodruff streaming low-rank decomposition algorithm on matrix A i using the sketch matrix S *. Once the the sketch matrix S * is computed, it can be used instead of a random sketch matrix in all future executions of the SCW algorithm. We demonstrate empirically that, for multiple types of data sets, an optimized sketch matrix S * can substantially reduce the approximation loss compared to a random matrix S, sometimes by one order of magnitude (see Figure 1). Equivalently, the optimized sketch matrix can achieve the same approximation loss for lower values of m. A possible disadvantage of learned sketch matrices is that an algorithm that uses them no longer offers worst-case guarantees. As a , if such an algorithm is applied to an input matrix that does not conform to the training distribution, the might be worse than if random matrices were used. To alleviate this issue, we also study mixed sketch matrices, where (say) half of the rows are trained and the other half are random. We observe that if such matrices are used in conjunction with the SCW algorithm, its are no worse than if only the random part of the matrix was used 2. Thus, the ing algorithm inherits the worst-case performance guarantees of the random part of the sketching matrix. At the same time, we show that mixed matrices still substantially reduce the approximation loss compared to random ones, in some cases nearly matching the performance of "pure" learned matrices with the same number of rows. Thus, mixed random matrices offer "the best of both worlds": improved performance for matrices from the training distribution, and worst-case guarantees otherwise. Notation. 
Consider a distribution D on matrices A ∈ R n×d. We define the training set as {A 1, · · ·, A N} sampled from D. For matrix A, its singular value decomposition (SVD) can be written as A = U ΣV such that both U and V have orthonormal columns and Σ = diag{λ 1, · · ·, λ d} is a diagonal matrix with nonnegative entries. In many applications it is quicker and more economical to Algorithm 1 Rank-k approximation of a matrix A using a sketch matrix S (from Section 4.1.1 of How sketching works. We start by describing the SCW algorithm (Algorithm 1) for low-rank approximation. The algorithm computes the SVD(SA):= U ΣV, and compute the best rank-k approximation of AV. Finally it outputs [AV] k V as a rank-k approximation of A. Note that if m is much smaller than d and n, the space bound of this algorithm is significantly better than when computing a rank-k approximation for A in the naïve way. Thus, minimizing m automatically reduces the space usage of the algorithm. Sketching matrix. We use matrix S that is sparse. Specifically, each column of S has exactly one non-zero entry, which is either +1 or −1. This means that the fraction of non-zero entries in S is 1/m. Therefore, one can use a vector to represent S, which is very memory efficient. It is worth noting, however, after multiplying S with other matrices, the ing matrix is in general not sparse. In this section, we describe our learning-based algorithm for computing a data dependent sketch S. The main idea is to use backpropagation algorithm to compute the stochastic gradient of S with respect to the rank-k approximation loss in Equation 1, where the initial value of S is the same random sparse matrix used in SCW. Once we have the stochastic gradient, we can run stochastic gradient descent (SGD) algorithm to optimize S, in order to improve the loss. Our algorithm maintains the sparse structure of S, and only optimizes the values of the n non-zero entries (initially +1 or −1). However, the standard SVD implementation (step 2 in Algorithm 1) is not differentiable, which means we cannot get the gradient in the straightforward way. To make SVD implementation differentiable, we use the fact that the SVD procedure can be represented as m individual top singular value decompositions (see e.g. ), and that every top singular value decomposition can be computed using the power method. The full description is deferred to the long version of this paper. Due to the extremely long computational chain, it is infeasible to write down the explicit form of loss function or the gradients. However, just like how modern deep neural networks compute their gradients, we used the autograd feature in PyTorch to numerically compute the gradient with respect to the sketching matrix S. We emphasize again that our method is only optimizing S for the training phase. After S is fully trained, we still call Algorithm 1 for low rank approximation, which has exactly the same running time as the SCW algorithm, but with better performance. The main question considered in this paper is whether, for natural matrix datasets, optimizing the sketch matrix S can improve the performance of the sketching algorithm for the low-rank decomposition problem. To answer this question, we implemented and compared the following methods for computing S ∈ R m×n. • Sparse Random. Sketching matrices are generated at random as in. Specifically, we select a random hash function h:, and for all i ∈ [n], S h[i],i is selected to be either +1 or −1 with equal probability. 
All other entries in S are set to 0. • Dense Random. All entries in the sketching matrices are sampled from Gaussian distribution. • Learned. Using the sparse random matrix as the initialization, we optimize the sketching matrix using the training set, and return the optimized matrix. • Mixed (J). We first generate two sparse random matrices S 1, S 2 ∈ R m 2 ×n (assuming m is even), and define S to be their combination. We then optimize S using the training set, but only S 1 will be updated, while S 2 is fixed. Therefore, S is a mixture of learned matrix and random matrix, and the first matrix is trained jointly with the second one. • Mixed (S). We first compute a learned matrix S 1 ∈ R m 2 ×n using the training set, and then append another sparse random matrix S 2 to get S ∈ R m×n. Therefore, S is a mixture of learned matrix and random matrix, but the learned matrix is trained separately. Datasets. We used a variety of datasets to test the performance of our methods: • Videos 3: Logo, Friends, Eagle. We downloaded three high resolution videos from Youtube, including logo video, Friends TV show, and eagle nest cam. From each video, we collect 500 frames of size 1920 × 1080 × 3 pixels, and use 400 matrices as the training (test) set. For each frame, we resize it as a 5760 × 1080 matrix. • Hyper. We use matrices from HS-SOD, a dataset for hyperspectral images from natural scenes. Each matrix has 1024 × 768 pixels, and we use 400 matrices as the training (test) set. • Tech. We use matrices from TechTC-300, a dataset for text categorization. Each matrix has 835, 422 rows, but on average only 25, 389 of the rows contain non-zero entries. On average each matrix has 195 columns. We use 200 matrices as the training (test) set. To evaluate the quality of a sketching matrix S, it suffices to evaluate the output of Algorithm 1 using the sketching matrix S on different input matrices A. For a collection of matrices Te, we define the error of the sketch S as Err(Te, S) where the second term denotes the optimal approximation loss on Te. In our datasets, some of the matrices have much larger singular values than the others. To avoid imbalance in the dataset, we normalize the matrices so that their top singular values are all equal. We first test all methods on different datasets, with various combination of k, m. See Figure 1 for the when k = 10, m = 20. As we can see, for video datasets, learned sketching matrices can get 20× better test error than the sparse random or dense random sketching matrices. For other datasets, learned sketching matrices are still more than 2× better. We also include the test error in Table 1 for the case when k = 20, 30. In Table 2, we investigate the performance of the mixed sketching matrices by comparing them with random and learned sketching matrices. In all scenarios, mixed sketching matrices yield much better than random sketching matrices, and sometimes the are comparable to those of learned sketching matrices. This means, in most cases it suffices to train one half of the sketching matrix to obtain good empirical , and at the same time, we can use the remaining random half of the sketch matrix to obtain worst-case guarantees. | Learning-based algorithms can improve upon the performance of classical algorithms for the low-rank approximation problem while retaining the worst-case guarantee. | 682 | scitldr |
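To make the learned-sketch training described above concrete, the following hedged PyTorch sketch optimizes only the n non-zero values of S while keeping the sparsity pattern fixed. For simplicity it relies on the autograd support of torch.linalg.svd rather than the power-method formulation of the SVD mentioned in the text, and `train_matrices` is assumed to be an iterable of n × d torch matrices sampled from the training distribution; both choices, as well as the optimizer and learning rate, are our assumptions.

```python
import torch

def scw_loss(S, A, k):
    """Differentiable approximation loss ||A - SCW(S, A)||_F."""
    SA = S @ A
    V = torch.linalg.svd(SA, full_matrices=False).Vh.T        # d x m
    AV = A @ V
    U, s, Wh = torch.linalg.svd(AV, full_matrices=False)
    AV_k = (U[:, :k] * s[:k]) @ Wh[:k]
    return torch.linalg.norm(A - AV_k @ V.T)

m, n, k = 20, 500, 10
rows = torch.randint(0, m, (n,))                      # fixed sparsity pattern h(i)
vals = (2.0 * torch.randint(0, 2, (n,)) - 1.0).requires_grad_(True)

opt = torch.optim.SGD([vals], lr=1e-3)
for A in train_matrices:                              # assumed iterable of n x d matrices
    S = torch.zeros(m, n)
    S[rows, torch.arange(n)] = vals                   # only the n non-zero values are learned
    opt.zero_grad()
    scw_loss(S, A, k).backward()
    opt.step()
```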
Neural conversational models are widely used in applications like personal assistants and chat bots. These models seem to give better performance when operating on word level. However, for fusion languages like French, Russian and Polish vocabulary size sometimes become infeasible since most of the words have lots of word forms. We propose a neural network architecture for transforming normalized text into a grammatically correct one. Our model efficiently employs correspondence between normalized and target words and significantly outperforms character-level models while being 2x faster in training and 20\% faster at evaluation. We also propose a new pipeline for building conversational models: first generate a normalized answer and then transform it into a grammatically correct one using our network. The proposed pipeline gives better performance than character-level conversational models according to assessor testing. Neural conversational models BID18 are used in a large number of applications: from technical support and chat bots to personal assistants. While being a powerful framework, they often suffer from high computational costs. The main computational and memory bottleneck occurs at the vocabulary part of the model. Vocabulary is used to map a sequence of input tokens to embedding vectors: one embedding vector is stored for each word in vocabulary. English is de-facto a standard language for training conversational models, mostly for a large number of speakers and simple grammar. In english, words usually have only a few word forms. For example, verbs may occur in present and past tenses, nouns can have singular and plural forms. For many other languages, however, some words may have tens of word forms. This is the case for Polish, Russian, French and many other languages. For these languages storing all forms of frequent words in a vocabulary significantly increase computational costs. To reduce vocabulary size, we propose to normalize input and output sentences by putting them into a standard form. Generated texts can then be converted into grammatically correct ones by solving morphological agreement task. This can be efficiently done by a model proposed in this work. Our contribution is two-fold:• We propose a neural network architecture for performing morphological agreement in fusion languages such as French, Polish and Russian (Section 2).• We introduce a new approach to building conversational models: generating normalized text and then performing morphological agreement with proposed model (Section 3); In this section we propose a neural network architecture for solving morphological agreement problem. We start by formally defining the morphological agreement task. Consider a grammatically correct sentence with words [a 1, a 2, . . ., a K]. Let S(a) be a function that maps any word to its standard form. For example, S("went") = "go". Goal of morphological agreement task is to learn a mapping from normalized sentence [S(a 1), S(a 2),..., S(a K)] = [a n 1,,a n 2, . . ., a n K] to initial sentence [a 1, a 2, . . ., a K]. Interestingly, reverse mapping may be performed for each word independently using specialized dictionaries. Original task, however, needs to consider dependencies between words in sequence in order to output a coherent text. An important property of this task is that the number of words, their order and meaning are explicitly contained in input sequence. To employ this knowledge, we propose a specific neural network architecture illustrated in FIG0. 
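A hedged PyTorch sketch of this architecture, whose detailed description follows in the next paragraph, is given below: a shared character-level LSTM encodes each normalized word, a word-level bidirectional LSTM mixes information across words, and a shared character-level LSTM decoder emits the inflected characters of each word. The attention over input characters and the exact layer sizes are omitted, and all module and argument names are ours, so this is an approximation of the model rather than its reference implementation.

```python
import torch
import torch.nn as nn

class ConcordeSketch(nn.Module):
    """Shared char-level word encoder, word-level BiLSTM context, char-level decoder."""
    def __init__(self, n_chars, char_dim=64, word_dim=512):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_enc = nn.LSTM(char_dim, word_dim, batch_first=True)
        self.word_ctx = nn.LSTM(word_dim, word_dim, batch_first=True, bidirectional=True)
        self.char_dec = nn.LSTM(char_dim, 2 * word_dim, batch_first=True)
        self.out = nn.Linear(2 * word_dim, n_chars)

    def forward(self, words, targets):
        # words / targets: lists of 1-D LongTensors with character ids per word
        embs = []
        for w in words:
            _, (h, _) = self.char_enc(self.char_emb(w).unsqueeze(0))
            embs.append(h[-1])                       # final hidden state as word embedding
        seq = torch.stack(embs, dim=1)               # (1, n_words, word_dim)
        ctx, _ = self.word_ctx(seq)                  # (1, n_words, 2 * word_dim)
        logits = []
        for i, t in enumerate(targets):              # teacher forcing on target characters
            h0 = ctx[:, i].unsqueeze(0).contiguous()
            c0 = torch.zeros_like(h0)
            dec, _ = self.char_dec(self.char_emb(t).unsqueeze(0), (h0, c0))
            logits.append(self.out(dec).squeeze(0))  # (len_t, n_chars) per word
        return logits
```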
Network operates as follows: first all normalized words are embedded using the same character-level LSTM encoder. The goal of the next step is to incorporate global information about other words to embedding of each word. To do so we pass word embedding sequence through a bidirectional LSTM BID7. This allows new embeddings to get information from all other words: information about previous words is brought by forward LSTM and information about subsequent words is taken from backward LSTM. Finally, new embeddings are decoded with character-level LSTM decoders. At this stage we also added attention BID1 over input characters of corresponding words for better performance. High-level overview of this model resembles sequence-to-sequence BID17 network: model learns to map input characters to output characters for each word using encoder-decoder sceme. The difference is that bidirectional neural network is used to distribute information between different words. Unlike simple character-level sequence-to-sequence model, our architecture allows for much faster evaluation, since encoding and decoding phases can be done in parallel for each word. Also, one of the main advantages of our approach is that information paths from inputs to outputs are much shorter which leads to slower performance degradation as input length increases (see Section 5.3). As discussed above, we propose a two stage approach to build a neural conversational model: first generate normalized answer using normalized question and then apply Concorde model to obtain grammatically correct response. In this section we discuss a modification of Concorde model for conditioning its output on question's morphological features. A modification of Concorde model that we call Q-Concorde uses two sources of input: question and normalized answer. Question is first embedded into a single vector with character-level RNN. This vector may carry important morphological information such as time, case and plurality of questions. Question embedding is then mixed with answer embeddings using linear mapping. The final model is shown in FIG2. DISPLAYFORM0 Most frequently used models for sequence to sequence mapping rely on generating embedding vector that contains all information about input sequence. Reconstruction solely from this vector usually in worse performance as length of the output sequence increases. Attention BID19, BID14, BID1 ) partially fixes this problem, though information bottleneck of embedding vector is still high. Encoder-decoder models have mostly been applied to tasks like speech recognition BID6, BID7, BID5 ), machine translation BID1, BID17, BID2 ) and neural conversational models BID18, BID16 ).Some works have tried to perform decomposition of input sequence to obtain shorter information paths. In input is first processed character-wise and then embeddings that correspond to word endings are used for sequence-to-sequence model. This modification makes input-output information paths shorter which leads to better performance. Word inflection is a most similar task to ours, but in this task model is asked to generate a specific word form while we want our model to automatically select desired word forms. BID3 proposed a supervised approach to predicting the set of all word forms by generating transformation rules from known inflection tables. They also propose to use Conditional Random Fields for unseen base forms. Some authors have also tried to apply neural networks for this problem. 
BID0 and BID4 propose to use bidirectional LSTM to encode the word. Then BID4 uses different decoders for different word forms, while BID0 suggests to have one decoder and to attach morphological features to its input. Besides recurrent networks, there has been an attempt to use convolutional networks. BID15 based his work on BID4 and proposed to first pass raw data through convolutional layers and then to pass them through recurrent encoder. BID9 BID11 library. We leave first 20 words from each sentence to reduce computational costs. Concorde and Q-Concorde models consist of 2-layer LSTM encoder and decoder with hidden size 512. We compare our model to three baselines: unigram charRNN, bigram charRNN and hierarchical model. Unigram and bigram models are standard sequence-to-sequence models with attention BID14 that operate with characters or pairs of characters as tokens. We use 2-layer LSTM as an encoder. Decoder consists of 2-layer LSTM followed by attention layer and another recurrent layer. The third baseline is a hierarchical model motivated by: we first embed each word using recurrent encoder and then compute sentence embedding by running word-level encoder on these embeddings. For baselines we use layer size of 768 which in a comparable number of parameters for all models. We train models with Adam BID10 optimizer in batch size 16 with learning rate 0.0002 that halves after each 50k updates. We terminate training after 300k updates which is enough for all models to converge. To evaluate our model we used French, Russian an Polish corpuses from OpenSubtitles 2 database. We performed morphological agreement for each subtitle line independently. We estimated potential vocabulary size reduction from normalization by selecting words that appeared more than 10 times in first 10M examples. This lead to 2.5 times reduction for Polish language, 2.4 for Russian, and 1.8 for French. We evaluated our model in two metrics: word and sentence accuracies. Word accuracy shows a fraction of words that were correctly generated. Sentence accuracy corresponds to the fraction of sentences that were transformed without any mistake. Results are reported in TAB1. From four models that we compared, our model gave the best performance among all datasets, while second best model was hierarchical model. We inspected our model to show some examples where it was able to infer plural form and gender for unseen words TAB2 ). For Russian language we found out that the model was able to learn some rare rules like changing the letter я to й when going to plural form in some words: один заяц, два зайца (one rabbit, two rabbits).We can also see that our model can infer gender from words. To show that we chose feminine, masculine and neuter words and asked the model to perform agreement with word "one". This word changes its spelling for different genders in French, Polish, and Russian. Results presented in TAB3 suggest that model can indeed correctly solve this task by setting correct gender to the numeral. TAB4 we show on full sentences. Interestingly, on quite a complex Russian example our model was able to perform agreement. To select the correct form of word соседнем (neighbouring), network had to use multiple markers from different parts of a sentence: gender from подъезд (entrance) and case from в (in). As a motivation for our model we argued that making shorter input-output paths may reduce information load of the embedding vector. 
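For completeness, the two evaluation metrics used above can be computed as follows; this is an illustrative implementation of the definitions in the text (exact matching of whitespace-separated words), not the authors' evaluation code.

```python
def word_sentence_accuracy(predictions, references):
    """Word accuracy: fraction of correctly generated words.
    Sentence accuracy: fraction of sentences reproduced without any mistake."""
    n_words = n_correct_words = n_correct_sents = 0
    for pred, ref in zip(predictions, references):
        pred_words, ref_words = pred.split(), ref.split()
        n_words += len(ref_words)
        n_correct_words += sum(p == r for p, r in zip(pred_words, ref_words))
        n_correct_sents += int(pred_words == ref_words)
    return n_correct_words / n_words, n_correct_sents / len(references)
```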
To check this hypothesis we computed average sentence accuracy for different input lengths and reported in FIG2.We can clearly see that all baseline models perform worse as the input length increases. However, this is not the case for our model -while character-level models perform with almost 0% accuracy when input is 100 characters long, our model still gives similar performance as for short sentences. This can be explained by the way in which models use embedding vectors. Baseline models have to share embedding capacity between all words in a sentence. Our model, however, has a separate embedding for each word and does not require the whole sentence to be squeezed into one vector. It is also clear that character-level models perform better for short sequences (about 33% of the test set). This may be the case since the capacity of the embedding vector is not fully used for them. Despite being worse for short inputs, our model can still handles many important cases quite well including those discussed in Section 5.2. To evaluate our conversation model we constructed a corpus of question-answer pairs from web site Otvet.mail.ru -general topic Russian service for questions and answers (analogue of Quora.com). The uniqueness of this corpus is that it contains general knowledge questions that allow the trained model to answer questions about movies, capitals, etc. This requires many rare entity-related words to be in the vocabulary which makes it extremely large without normalization. First we compared Q-Concorde and Concorde models to show that Q-Concorde can indeed grasp important morphological features form a context. We also trained baseline models with context concatenated to input sentence (with a special delimiter in between). word and sentence accuracies are reported in TAB5. Again, Concorde model was able to outperform baselines even though it didn't have access to the context. Also, Q-Concorde model was able to improve Concorde's performance. We inspected cases on which Q-Concorde model showed better performance than Concorde TAB6 ). In example 1 of this table, question was asked about a single object. Q-Concorde model used singular form, while Concorde used plural. Q-Concorde was also able to successfully carry correct case (example 2) and time (example 3) from the question. Some mistakes made by Q-Concorde model are shown in TAB7. In example 1, Q-Concorde wasn't able to decide whether to use polite form or not and used one word in a less polite form than another. An important property of Q-Concorde model is that it can generate different texts depending on lexical features of a question. For example, in TAB8 we changed question's tense from present simple to past simple ("what do you do?" and "what did you do?"). The model correctly generated answer in corresponding tense. We also tried to change gender of a word "did" in a question: from masculine делал ) to feminine далала. Our model used the correct gender to generate the answer. While model generated grammatically correct answers in all three cases, in the third case (past simple, masculine) model answered in less common form with a meaning that differs from expected one. Finally, we apply Q-Concorde model to a proposed pipeline for training conversational models. We compare our model with a 3-layer character-level sequence-to-sequence model which was trained on grammatically correct sentences. 
For generating diverse answers we train two models: one to predict answer given question and the other to predict question given answer, as suggested in BID16. This allows us to discard answers that are too general. To compare two models we set an experiment environment where assessors were asked to select one of two possible answers to the given question: one was generated by character-wise model and another was generated by our pipeline. Assessors did not know the order in which cases were shown and so they did not know which model generated the text. In 62.1% cases assessors selected the proposed model, and in the remaining 37.9% assessors preferred character-wise model. We noticed that time for processing one batch is much higher for character-level models since they need to process longer sequences sequentially. In TAB9 we report time for forward and backward pass of one batch (16 objects) and other important computational characteristics. We measured this time on GeForce GTX TITAN X graphic card. It turns out that proposed models have comparable evaluation time, but train faster than unigram and hierarchical models. In this paper we proposed a neural network model that can efficiently employ relationship between input and output words in morphological agreement task. We also proposed a modification for this model that uses context sentence. We apply this model for neural conversational model in a new pipeline: we use normalized question to generate normalized answer and then apply proposed model to obtain grammatically correct response. This model showed better performance than character level neural conversational model based on assessors responses. We achieved significant improvement comparing to character-level, bigram and hierarchical sequenceto-sequence models on morphological agreement task for Russian, French and Polish languages. Trained models seem to understand main grammatical rules and notions such as tenses, cases and pluralities. | Proposed architecture to solve morphological agreement task | 683 | scitldr |
This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The series coefficients, as well as the network weights, are computed by minimizing the weighted sum of the loss function and the violation of the ODE-Net dynamics. The problem is solved by coordinate descent that alternately minimizes, with respect to the coefficients and the weights, two unconstrained sub-problems using standard backpropagation and gradient methods. The ing optimization scheme is fully time-parallel and in a low memory footprint. Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique \citep{Chen2018NeuralOD}, on training surrogate models of small and medium-scale dynamical systems shows that it is at least one order of magnitude faster at reaching a comparable value of the loss function. The corresponding testing MSE is one order of magnitude smaller as well, suggesting generalization capabilities increase. Neural Ordinary Differential Equations (ODE-Nets;) can learn latent models from observations that are sparse in time. This property has the potential to enhance the performance of neural network predictive models in applications where information is sparse in time and it is important to account for exact arrival times and delays. In complex control systems and model-based reinforcement learning, planning over a long horizon is often needed, while high frequency feedback is necessary for maintaining stability . Discrete-time models, including RNNs , often struggle to fully meet the needs of such applications due to the fixed time resolution. ODE-Nets have been shown to provide superior performance with respect to classic RNNs on time series forecasting with sparse training data. However, learning their parameters can be computationally intensive. In particular, ODE-Nets are memory efficient but time inefficient. In this paper, we address this bottleneck and propose a novel alternative strategy for system identification. We propose SNODE, a compact representation of ODE-Nets for system identification with full state information that makes use of a higher-order approximation of its states by means of Legendre polynomials. This is outlined in Section 4. In order to find the optimal polynomial coefficients and network parameters, we develop a novel optimization scheme, which does not require to solve an ODE at each iteration. The ing algorithm is detailed in Section 3 and is based on backpropagation (; ;) and automatic differentiation . The proposed method is fully parallel with respect to time and its approximation error reduces exponentially with the Legendre polynomial order . Summary of numerical experiments. In Section 5, our method is tested on a 6-state vehicle problem, where it is at least one order or magnitude faster in each optimizer iteration than explicit and adjoint methods, while convergence is achieved in a third of the iterations. At test time, the MSE is reduced by one order of magnitude. In Section 6, the method is used for a 30-state system consisting of identical vehicles, coupled via a known collision avoidance policy. 
Again, our method converges in a third of the iterations required by backpropagation thourgh a solver and each iteration is 50x faster than the fastest explicit scheme. The minimization of a scalar-valued loss function that depends on the output of an ODE-Net can be formulated as a general constrained optimization problem: where x(t) ∈ X is the state, u(t) ∈ U is the input, the loss and ODE functions L and f are given, and the parameters θ have to be learned. The spaces X and U are typically Sobolev (e.g. Hilbert) spaces expressing the smoothness of x(t) and u(t) (see Section 8). Equation can be used to represent several inverse problems, for instance in machine learning, estimation, and optimal control (; ;). Problem can be solved using gradient-based optimization through several time-stepping schemes for solving the ODE. have proposed to use the adjoint method when f is a neural network. These methods are typically relying on explicit time-stepping schemes . Limitations of these approaches are briefly summarized: Limitations of backpropagation through an ODE solver. The standard approach for solving this problem is to compute the gradients ∂L/∂θ using backpropagation through a discrete approximation of the constraints, such as Runge-Kutta methods (Runge, 1895;) or multistep solvers. This ensures that the solution remains feasible (within a numerical tolerance) at each iteration of a gradient descent method. However, it has several drawbacks: 1) the memory cost of storing intermediate quantities during backpropagation can be significant, 2) the application of implicit methods would require solving a nonlinear equation at each step, 3) the numerical error can significantly affect the solution, and 4) the problem topology can be unsuitable for optimization . Limitations of adjoint methods. ODE-Nets solve using the adjoint method, which consists of simulating a dynamical system defined by an appropriate augmented Hamiltonian , with an additional state referred to as the adjoint. In the backward pass the adjoint ODE is solved numerically to provide the gradients of the loss function. This means that intermediate states of the forward pass do not need to be stored. An additional step of the ODE solver is needed for the backward pass. This suffers from a few drawbacks: 1) the dynamics of either the hidden state or the adjoint might be unstable, due to the symplectic structure of the underlying Hamiltonian system, referred to as the curse of sensitivity in ; 2) the procedure requires solving a differential algebraic equation and a boundary value problem which is complex, time consuming, and might not have a solution . Limitations of hybrid methods. ANODE splits the problem into time batches, where the adjoint is used, storing in memory only few intermediate states from the forward pass. This allows to improve the robustness and generalization of the adjoint method. A similar improvement could be obtained using reversible integrators. However, its computational cost is of the same order of the adjoint method and it does not offer further opportunities for parallelization. Our algorithm is based on two ingredients: i) the discretization of the problem using spectral elements leading to SNODE, detailed in Section 4, and ii) the relaxation of the ODE constraint from, enabling efficient training through backpropagation. The latter can be applied directly at the continuous level and significantly reduces the difficulty of the optimization, as shown in our examples. 
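For readability, a hedged reconstruction of the constrained problem and of the relaxation discussed above is given below; it is inferred from the surrounding text, so the exact notation (e.g., the precise form of the loss L and the role of the weighting γ) may differ from the original equations.

```latex
% Hedged reconstruction of the constrained problem, inferred from the text.
\min_{\theta}\ L\bigl(x(\cdot)\bigr)
\quad\text{s.t.}\quad
\dot{x}(t) = f\bigl(t, x(t), u(t); \theta\bigr),\qquad
x(t_0) = x_0,\qquad x \in X,\; u \in U.

% Relaxed, unconstrained objective used by the coordinate descent described next.
\min_{x,\,\theta}\ L\bigl(x(\cdot)\bigr)
  + \gamma \int_{t_0}^{T} \bigl\|\dot{x}(t) - f\bigl(t, x(t), u(t); \theta\bigr)\bigr\|^{2}\,dt.
```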
The problem in is split into two smaller subproblems: one finds the trajectory x(t) that minimizes an unconstrained relaxation of. The other trains the network weights θ such that the trajectory becomes a solution of the ODE. Both are addressed using standard gradient descent and backpropagation. In particular, a fixed number of ADAM or SGD steps is performed for each problem in an alternate fashion, until convergence. In the following, the details of each subproblem are discussed. Step 0: Initial trajectory. The initial trajectory x(t) is chosen by solving the problem If this problem does not have a unique solution, a regularization term is added. For a quadratic loss, a closed-form solution is readily available. Otherwise, a prescribed number of SGD iterations is used. Step 1: Coordinate descent on residual. Once the initial trajectory x (t) is found, θ is computed by solving the unconstrained problem: If the value of the residual at the optimum θ * is smaller than a prescribed tolerance, then the algorithms stops. Otherwise, steps 1 and 2 are iterated until convergence. Step 2: Coordinate descent on relaxation. Once the candidate parameters θ are found, the trajectory is updated by minimizing the relaxed objective: Discussion. The proposed algorithm can be seen as an alternating coordinate gradient descent on the relaxed functional used in problem, i.e., by alternating a minimization with respect to x(t) and θ. If γ = 0, multiple minima can exist, since each choice of the parameters θ would induce a different dynamics x(t), solution of the original constraint. For γ = 0, the loss function in trades-off the ODE solution residual for the data fitting, providing a unique solution. The choice of γ implicitly introduces a satisfaction tolerance (γ), i.e., similar to regularized regression , implying that ẋ(t) − f (t, x(t); θ) ≤ (γ). Concurrently, problem reduces the residual. In order to numerically solve the problems presented in the previous section, a discretization of x(t) is needed. Rather than updating the values at time points t i from the past to the future, we introduce a compact representation of the complete discrete trajectory by means of the spectral element method. Spectral approximation. We start by representing the scalar unknown trajectory, x(t), and the known input, u(t), as truncated series: where x i, u i ∈ R and ψ i (t), ζ i (t) are sets of given basis functions that span the spaces X h ⊂ X and U h ⊂ U. In this work, we use orthogonal Legendre polynomials of order p for, where p is a hyperparameter, and the cosine Fourier basis for ζ i (t), where z is fixed. Collocation and quadrature. In order to compute the coefficients x i of, we enforce the equation at a discrete set Q of collocation points t q. Here, we choose p + 1 Gauss-Lobatto nodes, which include t = t 0. This directly enforces the initial condition. Other choices are also possible . Introducing the vectors of series coefficients and of evaluations at quadrature points x(t Q) = {x(t q)} q∈Q, the collocation problem can be solved in matrix form as We approximate the integral as a sum of residual evaluations over Q. Assuming that x = x 0, the integrand at all quadrature points t Q can be computed as a component-wise norm Fitting the input data. For the case when problem admits a unique a solution, we propose a new direct training scheme, δ-SNODE, which is summarized in Algorithm 1. In general, a least-squares approach must be used instead. 
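The following NumPy sketch illustrates, for a scalar state, the spectral representation and residual introduced above: Legendre basis functions evaluated at the p + 1 Gauss-Lobatto nodes, the corresponding derivative matrix, and the residual R(t_q) = x'(t_q) − f(t_q, x(t_q)). The helper names are ours and the example uses p = 14 as in the experiments; it is meant as an illustration of the construction, not as the paper's code.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def lobatto_nodes(p):
    """p + 1 Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the roots of P_p'."""
    dPp = leg.legder([0.0] * p + [1.0])
    return np.concatenate(([-1.0], np.sort(leg.legroots(dPp)), [1.0]))

def basis_matrices(p, t_q):
    """Psi[q, i] = P_i(t_q) and dPsi[q, i] = P_i'(t_q) for the Legendre basis i = 0..p."""
    eye = np.eye(p + 1)
    Psi = np.stack([leg.legval(t_q, eye[i]) for i in range(p + 1)], axis=1)
    dPsi = np.stack([leg.legval(t_q, leg.legder(eye[i])) for i in range(p + 1)], axis=1)
    return Psi, dPsi

def residual(coeffs, f, t_q, Psi, dPsi):
    """R(t_q) = x'(t_q) - f(t_q, x(t_q)) for x(t) = sum_i coeffs[i] * P_i(t) (scalar state)."""
    x_q, dx_q = Psi @ coeffs, dPsi @ coeffs
    return dx_q - np.array([f(t, x) for t, x in zip(t_q, x_q)])

# toy check: x' = -x with x(-1) = 1 and p = 14, fitting the exact solution at the nodes
p = 14
t_q = lobatto_nodes(p)
Psi, dPsi = basis_matrices(p, t_q)
coeffs = np.linalg.solve(Psi, np.exp(-(t_q + 1.0)))
print(np.abs(residual(coeffs, lambda t, x: -x, t_q, Psi, dPsi)).max())  # small: spectral accuracy
```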
This entails computing the integral in, which can be done by evaluating the loss function L at quadrature points t q. If the input data is not available at t q, we approximate the integral by evaluating L at the available time points. The corresponding alternating coordinate descent scheme α-SNODE is presented in Algorithm 2. In the next sections, we study the consequences of a low-data scenario on this approach. We use fixed numbers N t and N x of updates for, respectively, θ and x(t). Both are performed with standard routines, such as SGD. In our experiments, we use ADAM to optimize the parameters and an interpolation order p = 14, but any other orders and solvers are possible. Input: M, D from-. ] end for end while Output: θ * Ease of time parallelization. If R(t q) = 0 is enforced explicitly at q ∈ Q, then the ing discrete system can be seen as an implicit time-stepping method of order p. However, while ODE integrators can only be made parallel across the different components of x(t), the assembly of the residual can be done in parallel also across time. This massively increases the parallelization capabilities of the proposed schemes compared to standard training routines. Memory cost. If an ODE admits a regular solution, with regularity r > p, in the sense of Hilbert spaces, i.e., of the number of square-integrable derivatives, then the approximation error of the SNODE converges exponentially with p . Hence, it produces a very compact representation of an ODE-Net. Thanks to this property, p is typically much lower than the equivalent number of time steps of explicit or implicit schemes with a fixed order. This greatly reduces the complexity and the memory requirements of the proposed method, which can be evaluated at any t via by only storing few x i coefficients. Stability and vanishing gradients. The forward Euler method is known to have a small region of convergence. In other words, integrating very fast dynamics requires a very small time step, dt, in order to provide accurate . In particular, for the solver error to be bounded, the eigenvalues of the state Jacobian of the ODE need to lie into the circle of the complex plane centered at (−1, 0) with radius 1/dt . Higher-order explicit methods, such as Runge-Kutta (Runge, 1895), have larger but still limited convergence regions. Our algorithms on the other hand are implicit methods, which have a larger region of convergence than recursive (explicit) methods . We claim that this in a more stable and robust training. This claim is supported by our experiments. Reducing the time step can improve the Euler accuracy but it can still lead to vanishing or exploding gradients . In Appendix C, we show that our methods do not suffer from this problem. Experiments setup and hyperparameters. For all experiments, a common setup was employed and no optimization of hyperparameters was performed. Time horizon T = 10s and batch size of 100 were used. Learning rates were set to 10 −2 for ADAM (for all methods) and 10 −3 for SGD (for α-SNODE). For the α-SNODE method, γ = 3 and 10 iterations were used for the SGD and ADAM algorithms at each epoch, as outlined in Algorithm 2. The initial trajectory was perturbed as x 0 = x 0 + ξ, ξ ∼ U (−0.1, 0.1). This perturbation prevents the exact convergence of Algorithm 1 during initialization, allowing to perform the alternating coordinate descent algorithm. 
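A hedged sketch of the alternating scheme (α-SNODE-style) is given below. It assumes scalar-valued, differentiable helpers `residual(coeffs, f)` (the integrated ODE residual, e.g., a quadrature-weighted sum of the pointwise residuals from the previous snippet) and `data_loss(coeffs)` (the loss L evaluated from the polynomial trajectory); both helpers, the fixed epoch budget, and the absence of the tolerance-based stopping criterion are simplifications on our side.

```python
import torch

def alternating_descent(f, coeffs, theta_params, data_loss, residual,
                        gamma=3.0, n_theta=10, n_x=10, epochs=100):
    """Alternate between fitting the network weights to the current trajectory and
    updating the trajectory coefficients on the relaxed objective (Steps 1 and 2)."""
    opt_theta = torch.optim.Adam(theta_params, lr=1e-2)
    opt_x = torch.optim.SGD([coeffs], lr=1e-3)
    for _ in range(epochs):                        # a tolerance-based stop could replace this
        for _ in range(n_theta):                   # Step 1: minimize the residual w.r.t. theta
            opt_theta.zero_grad()
            residual(coeffs.detach(), f).backward()
            opt_theta.step()
        for _ in range(n_x):                       # Step 2: relaxed objective w.r.t. coeffs
            opt_x.zero_grad()
            (data_loss(coeffs) + gamma * residual(coeffs, f)).backward()
            opt_x.step()
    return coeffs, theta_params
```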
Let us consider the systemη where η, v ∈ R 3 are the states, u = (F x, 0, τ xy) is the control, C(v) is the Coriolis matrix, d(v) is the (linear) damping force, and J(η) encodes the coordinate transformation from the body to the world frame . A gray-box model is built using a neural network for each matrix Each network consists of two layers, the first with a tanh activation. Bias is excluded for f C and f d. For f J, sin(φ) and cos(φ) are used as input features, where φ is the vehicle orientation. When inserted in, these discrete networks produce an ODE-Net that is a surrogate model of the physical system. The trajectories of the system and the learning curves are shown in Appendix A. Comparison of methods in the high-data regime. In the case of δ-SNODE and α-SNODE, only p + 1 points are needed for the accurate integration of the loss function, if such points coincide with the Gauss-Lobatto quadrature points. We found that 100 equally-spaced points produce a comparable . Therefore, the training performance of the novel and traditional training methods were compared by sampling the trajectories at 100 equally-spaced time points. Table 1a shows that δ-SNODE outperforms BKPR-DoPr5 by a factor of 50, while producing a significantly improved generalization. The speedup reduces to 20 for α-SNODE, which however yields a further reduction of the testing MSE by a factor of 10, as can be seen in Figure 1. Comparison in the low-data regime. The performance of the methods was compared using fewer time points, randomly sampled from a uniform distribution. For the baselines, evenly-spaced points were used. Table 1b shows that α-SNODE preserves a good testing MSE, at the price of an increased number of iterations. With only 25% of data, α-SNODE is 10x faster than BKPR-DoPr5. Moreover, its test MSE is 1/7 than BKPR-DoPr5 and up to 1/70 than BKPR-Euler, showing that the adaptive time step of DoPr5 improves significantly the baseline but it is unable to match the accuracty of the proposed methods. The adjoint method produced the same as the backprop (±2%). Consider the multi-agent system consisting of N a kinematic vehicles: where η ∈ R 3Na are the states (planar position and orientation), J(η i) is the coordinate transform from the body to world frame, common to all agents. The agents velocities are determined by known arbitrary control and collision avoidance policies, respectively, K c and K o plus some additional high frequency measurable signal w = w(t), shared by all vehicles. The control laws are non-linear and are described in detail in Appendix B. We wish to learn their kinematics matrix by means of a neural network as in Section 5. The task is simpler here, but the ing ODE has 3N a states, coupled by K 0. We simulate N a = 10 agents in series. Comparison of methods with full and sparse data. The learning curves for high-data regime are in Figure 3. For method α-SNODE, training was terminated when the loss in is less than γL +R, withL = 0.11 andR = 0.01. For the case of 20% data, we setL = 0.01. Table 2 summarizes . δ-SNODE is the fastest method, followed by α-SNODE which is the best performing. Iteration time of BKPR-Euler is 50x slower, with 14x worse test MSE. ADJ-Euler is the slowest but its test MSE is in between BKPR-Euler and our methods. Random down-sampling of the data by 50% and 20% (evenly-spaced for the baselines) makes ADJ-Euler fall back the most. BKPR-DoPr5 failed to find a time step meeting the tolerances, therefore they were increased to rtol= 10 −5, atol= 10 −7. 
Since the loss continued to increase, training was terminated at 200 epochs. ADJ-DoPr5 failed to compute gradients. Test trajectories are in Figure 2. Additional details are in Appendix B. Robustness of the methods. The use of a high order variable-step method (DoPr5), providing an accurate ODE solution, does not however lead to good training . In particular, the loss function continued to increase over the iterations. On the other hand, despite being nearly 50 times slower than our methods, the fixed-step forward Euler solver was successfully used for learning the dynamics of a 30-state system in the training configuration described in Appendix B. One should however note that, in this configuration, the gains for the collision avoidance policy K o (which couples the ODE) were set to small values. This makes the system simpler and more stable than having a larger gain. As a , if one attempts to train with the test configuration from Appendix B, where the gains are increased and the system is more unstable, then backpropagating trough Euler simply fails. Comparing Figures 3 and 4, it can be seen that the learning curves of our methods are unaffected by the change in the gains, while BKPR-Euler and ADJ-Euler fail to decrease the loss. RNN training pathologies. One of the first RNNs to be trained successfully were LSTMs , due to their particular architecture. Training an arbitrary RNN effectively is generally difficult as standard RNN dynamics can become unstable or chaotic during training and this can cause the gradients to explode and SGD to fail . When RNNs consist of discretised ODEs, then stability of SGD is intrinsically related to the size of the convergence region of the solver . Since higher-order and implicit solvers have larger convergence region , following it can be argued that our method has the potential to mitigate instabilities and hence to make the learning more efficient. This is supported by our . Unrolled architectures. In , an RNN has been used with a stopping criterion, for iterative estimation with adaptive computation time. Highway and residual networks have been studied in as unrolled estimators. In this context, treated residual networks as autonomous discrete-ODEs and investigated their stability. Finally, in a discrete-time non-autonomous ODE based on residual networks has been made explicitly stable and convergent to an input-dependant equilibrium, then used for adaptive computation. Training stable ODEs. In , ODE stability conditions where used to train unrolled recurrent residual networks. Similarly, when using our method on ODE stability can be enforced by projecting the state weight matrices, A, into the Hurwitz stable space: i.e. A ≺ 0. At test time, overall stability will also depend on the solver . Therefore, a high order variable step method (e.g. DoPr5) should be used at test time in order to minimize the approximation error. Dynamics and machine learning. A physics prior on a neural network was used by in the form of a consistency loss with data from a simulation. In , a differentiable physics framework was introduced for point mass planar models with contact dynamics. looked at Partial Differential Equations (PDEs) to analyze neural networks, while ) used Gaussian Processes (GP) to model PDEs. The solution of a linear ODE was used in in conjunction with a structured multi-output GP to model patients outcome of continuous treatment observed at random times. predicted the divergence rate of a chaotic system with RNNs. 
Test time and cross-validation. At test time, since the future outputs are unknown, an explicit integrator is needed. For cross-validation, the loss instead needs to be evaluated on a different dataset. In order to do so, one needs to solve the ODE forward in time. However, since the output data is available during cross-validation, a corresponding polynomial representation can be found and the relaxed loss can be evaluated efficiently. Nonsmooth dynamics. We have assumed that the ODE-Net dynamics has a regularity r > p in order to take advantage of the exponential convergence of spectral methods, i.e., that their approximation error reduces as $O(h^p)$, where h is the size of the window used to discretize the interval. However, this might not be true in general. In these cases, the optimal choice would be to use an hp-spectral approach, where h is reduced locally only near the discontinuities. This is very closely related to adaptive time-stepping for ODE solvers. Topological properties, convergence, and better generalization. There are a few theoretical open questions stemming from this work. We argue that one reason for the performance improvement shown by our algorithms is the fact that the set of functions generated by a fixed neural network topology does not possess favorable topological properties for optimization, as discussed in . Therefore, the constraint relaxation proposed in this work may improve the properties of the optimization space. This is similar to interior point methods and can help with accelerating the convergence but also with preventing local minima. One further explanation is the fact that the proposed method does not suffer from vanishing or exploding gradients, as shown in Appendix C. Moreover, our approach very closely resembles the MAC scheme, for which theoretical convergence results are available . Multiple ODEs: synchronous vs asynchronous. The proposed method can be used for an arbitrary cascade of dynamical systems, as they can be expressed as a single ODE. When only the final state of one ODE (or its trajectory) is fed into the next block, e.g. as in , the method could be extended by means of 2M smaller optimizations, where M is the number of ODEs. Hidden states. Latent states do not appear in the loss, so training, and particularly initializing the polynomial coefficients, is more difficult. A hybrid approach is to warm-start the optimizer using a few iterations of backpropagation. We plan to investigate a full spectral approach in the future. The model is formulated in a concentrated parameter form . We follow the notation of . Recall the system definition of Section 5, where η, v ∈ R^3 are the states, namely, the x and y coordinates in a fixed (world) frame, the vehicle orientation φ with respect to this frame, and the body-frame velocities v_x, v_y and angular rate ω. The input is a set of torques in the body frame, u = (F_x, 0, τ_xy). The kinematic matrix is $J(\eta) = \begin{pmatrix} \cos(\phi) & -\sin(\phi) & 0 \\ \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{pmatrix}$. (Figure: the first three columns compare the true trajectories (red) with the predictions of the surrogate (black) at different iterations of the optimization; the last column shows the loss function at each iteration. δ-SNODE has faster convergence than Euler.) The multi-agent simulation consists of $N_a$ kinematic vehicles, $\dot{\eta}_i = J(\eta_i)\, v_i$, where η ∈ R^{3N_a} are the states of all vehicles, namely the positions x_i, y_i and the orientation φ_i of vehicle i in the world frame, while v ∈ R^{2N_a} are the control signals, in the form of linear and angular velocities ν_i, ω_i.
The kinematics matrix is $J(\eta_i) = \begin{pmatrix} \cos(\phi_i) & 0 \\ \sin(\phi_i) & 0 \\ 0 & 1 \end{pmatrix}$. | This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. | 684 | scitldr
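To illustrate the kinematic model used in the multi-agent experiments above, here is a minimal NumPy rollout of N unicycle-type agents under a toy velocity policy. The forward-Euler integration and the circular-driving policy are only for illustration; in particular, the policy below is not the collision-avoidance policy K_o described in the text.

```python
import numpy as np

def J(phi):
    """Body-to-world kinematics of a planar unicycle-type vehicle."""
    return np.array([[np.cos(phi), 0.0],
                     [np.sin(phi), 0.0],
                     [0.0,         1.0]])

def rollout(eta0, policy, dt=0.01, steps=1000):
    """Forward-Euler rollout of N agents; eta0 has shape (N, 3) with rows (x, y, phi)."""
    eta, traj = eta0.copy(), [eta0.copy()]
    for k in range(steps):
        v = policy(eta, k * dt)                          # (N, 2): linear and angular velocity
        eta = eta + dt * np.stack([J(e[2]) @ vi for e, vi in zip(eta, v)])
        traj.append(eta.copy())
    return np.stack(traj)                                # (steps + 1, N, 3)

# toy example: 10 agents driving in circles with a shared high-frequency speed signal w(t)
eta0 = np.column_stack([np.arange(10.0), np.zeros(10), np.zeros(10)])
policy = lambda eta, t: np.tile([1.0 + 0.1 * np.sin(5.0 * t), 0.3], (len(eta), 1))
trajectory = rollout(eta0, policy)
```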
Exploration in sparse reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly these signals are added as bonus rewards, which in a mixture policy that neither conducts exploration nor task fulfillment resolutely. In this paper, we instead learn separate intrinsic and extrinsic task policies and schedule between these different drives to accelerate exploration and stabilize learning. Moreover, we introduce a new type of intrinsic reward denoted as successor feature control (SFC), which is general and not task-specific. It takes into account statistics over complete trajectories and thus differs from previous methods that only use local information to evaluate intrinsic motivation. We evaluate our proposed scheduled intrinsic drive (SID) agent using three different environments with pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite. The show a substantially improved exploration efficiency with SFC and the hierarchical usage of the intrinsic drives. A video of our experimental can be found at https://gofile.io/?c=HpEwTd. Reinforcement learning (RL) agents learn on evaluative feedback (reward signals) instead of instructive feedback (ground truth labels), which takes the process of automating the development of intelligent problem-solving agents one step further . With deep networks as powerful function approximators bringing traditional RL into high-dimensional domains, deep reinforcement learning (DRL) has shown great potential (; ;). However, the success of DRL often relies on carefully shaped dense extrinsic reward signals. Although shaping extrinsic rewards can greatly support the agent in finding solutions and shortening the interaction time, designing such dense extrinsic signals often requires substantial domain knowledge, and calculating them typically requires ground truth state information, both of which is hard to obtain in the context of robots acting in the real world. When not carefully designed, the reward shape could sometimes serve as bias or even distractions and could potentially hinder the discovery of optimal solutions. More importantly, learning on dense extrinsic rewards goes backwards on the progress of reducing supervision and could prevent the agent from taking full advantage of the RL framework. In this paper, we consider terminal reward RL settings, where a signal is only given when the final goal is achieved. When learning with only an extrinsic terminal reward indicating the task at hand, intelligent agents are given the opportunity to potentially discover optimal solutions even out of the scope of the well established domain knowledge. However, in many real-world problems defining a task only by a terminal reward means that the learning signal can be extremely sparse. The RL agent would have no clue about what task to accomplish until it receives the terminal reward for the first time by chance. Therefore in those scenarios guided and structured exploration is crucial, which is where intrinsically-motivated exploration has recently gained great success (; b). Most commonly in current state-of-the-art approaches, an intrinsic reward is added as a reward bonus to the extrinsic reward. Maximizing this combined reward signal, however, in a mixture policy that neither acts greedily with regard to extrinsic reward max-imization nor to exploration. 
Furthermore, the non-stationary nature of the intrinsic signals could potentially lead to unstable learning on the combined reward. In addition, current state-of-the-art methods have been mostly looking at local information calculated out of 1-step lookahead for the estimation of the intrinsic rewards, e.g. one step prediction error , or network distillation error of the next state (b). Although those intrinsic signals can be propagated back to earlier states with temporal difference (TD) learning, it is not clear that this in optimal long-term exploration. We seek to address the aforementioned issues as follows: 1. We propose a hierarchical agent scheduled intrinsic drive (SID) that focuses on one motivation at a time: It learns two separate policies which maximize the extrinsic and intrinsic rewards respectively. A high-level scheduler periodically selects to follow either the extrinsic or the intrinsic policy to gather experiences. Disentangling the two policies allows the agent to faithfully conduct either pure exploration or pure extrinsic task fulfillment. Moreover, scheduling (even within an episode) implicitly increases the behavior policy space exponentially, which drastically differs from previous methods where the behavior policy could only change slowly due to the incremental nature of TD learning. 2. We introduce successor feature control (SFC), a novel intrinsic reward that is based on the concept of successor features. This feature representation characterizes states through the features of all its successor states instead of looking at local information only. This implicitly makes our method temporarily extended, which enables more structured and farsighted exploration that is crucial in exploration-challenging environments. We note that both the proposed intrinsic reward SFC and the hierarchical exploration framework SID are without any task-specific components, and can be incorporated into existing DRL methods with minimal computation overhead. We present experimental in three sets of environments, evaluating our proposed agent in the domains of visual navigation and control from pixels, as well as its capabilities of finding optimal solutions under distraction. Intrinsic Motivation and Auxiliary Tasks Intrinsic motivation can be defined as agents conducting actions purely out of the satisfaction of its internal rewarding system rather than the extrinsic rewards . There exist various forms of intrinsic motivation and they have achieved substantial improvement in guiding exploration for DRL, in tasks where extrinsic signals are sparse or missing altogether. proposed to evaluate curiosity, one of the most widely used kinds of intrinsic motivation, with the 1-step prediction error of the features of the next state made by a forward dynamics model. Their ICM module has been shown to work well in visual domains including first-person view navigation. Since ICM is potentially susceptible to stochastic transitions (a), Burda et al. (2018b) propose as a reward bonus the error of predicting the features of the current state output by a randomly initialized fixed embedding network. The value function is decomposed for extrinsic and intrinsic reward, but different to us a single mixture policy is learned. Another form of curiosity, learning progress or the change in the prediction error, has been connected to count-based exploration via a pseudo-count and has also been used as a reward bonus. 
propose to train a reachability network, which gives out a reward based on whether the current state is reachable within a certain amount of steps from any state in the current episode. Similar to our proposed SFC, their intrinsic motivation is related to choosing states that could lead to novel trajectories. However, we use two different distance metrics, theirs is explicitly learned to be proportional to the time step differences while ours is based on successor features which measures two states by the difference of the average feature activations of future trajectories. Moreover, their method rewards states with high distance to the states in the current episode while our method rewards states with high distance to the states also from past trajectories, as the successor features are trained from samples of the replay buffer. Auxiliary tasks have been proposed for learning more representative and distinguishable features. add depth prediction and loop closure prediction as auxiliary tasks for learning the features. learn separate policies for maximizing pixel changes (pixel control) and activating units of a specific hidden layer (feature control). However, their proposed UNREAL agent never follows those auxiliary policies as they are only used to learn more suitable features for the main extrinsic task. Hierarchical RL Various HRL approaches have been proposed (a; ; ;). In the context of intrinsic motivation, feature control has been adopted into a hierarchical setting , in which options are constructed for altering given features. However, they report that a flat policy trained on the intrinsic bonus achieves similar performance to the hierarchical agent. Our hierarchical design is perhaps inspired mostly by the work of. Unlike other HRL approaches that try to learn a set of options to construct the optimal policy, their proposed SAC agent aims to learn one flat policy that maximizes the extrinsic reward. While SAC schedules between following the extrinsic task and a set of pre-defined auxiliary tasks such as maximizing touch sensor readings or translation velocity, in this paper we investigate scheduling between the extrinsic task and intrinsic motivation that is general and not task-specific. A concurrent work along this line is presented by. Successor Representation The successor representation (SR) was first introduced to improve generalization in TD learning . While previous works extended SR to the deep setting for better generalized navigation and control algorithms across similar environments and changing goals (b; ;), we focus on its temporarily extended property to accelerate exploration. SR has also been investigated under the options framework. When using SR to measure the intrinsic motivation, the most relevant work to ours is that of. They also design a task-independent intrinsic reward based on SR, however they rely on the concept of count-based exploration and propose a reward bonus, that vastly differs from ours. Their bonus is inverse proportional to the norm of the SR while our formulation rewards change in the SR of two successive states. We will present our proposed method in the next section. We use the RL framework for learning and decision-making under uncertainty. It is formalized by Markov decision processes (MDPs) defined by the tuple S, A, p, r, γ. At time step t the agent samples an action a ∈ A according to policy π(·|s), which depends on its current state s ∈ S. The agent receives a scalar reward r ∈ R and transits to the next state s ∈ S. 
The distribution of the corresponding state, action and reward process (S_t, A_t, R_{t+1}) is determined by the distribution of the initial state S_0, the transition operator p and the policy π. The goal of the agent is to find a policy that maximizes the expectation of the sum of discounted rewards Σ_{k=0}^{T} γ^k R_{t+k+1}. We seek to speed up learning in sparse reward RL, where the reward signal is uninformative for almost all transitions. We set the focus on terminal reward scenarios, where the agent only receives a single reward of +1 for successfully accomplishing the task and 0 otherwise. We will first introduce our proposed intrinsic reward successor feature control (SFC) (Sec. 3.1, 3.2), then present our proposed hierarchical framework for accelerating intrinsically motivated exploration, which we denote as scheduled intrinsic drive (SID) (Sec. 3.3, 3.4). In order to encode long-term statistics into the design of intrinsic rewards for far-sighted exploration, we build on the formulation of the successor representation (SR), which introduces a temporally extended view of the states. The SR represents a state s by the occupancies of all other states from a process starting in s and following a fixed policy π, where the occupancies denote the average number of time steps the state process stays in each state per episode. (Figure 1, four-room gridworld: The agent starts at the red cross and transitions to an adjacent state at each time step. The goal is to explore the four rooms when no extrinsic reward is provided. In a) each state is annotated by its SD (Eq. 3) to the starting state and b) shows for each state the highest possible SFC reward (Eq. 4) for a one-step transition from it. Here the successor features are learned using a random walk. c) and d) show a comparison between visitation counts of each state from a random agent and an agent that uses the SFC rewards for control via Q-learning. In the latter case the successor features are learned from scratch via TD. In this environment, the agent receives high rewards for crossing bottleneck states when the SF are learned beforehand using a random policy. But even when the SF are learned during exploration, bottleneck states are still visited disproportionately often. Furthermore, the intrinsic reward greatly improves exploration compared to a random agent. For implementation details see Appendix D.4.) Successor features (SF) extend the concept to an arbitrary feature embedding φ: S → ℝ^m. For a fixed policy π and embedding φ, the SF is defined by the m-dimensional vector ψ^π(s) = E_π[Σ_{k=0}^{∞} γ^k φ(S_{t+k}) | S_t = s]. Analogously, the SF represent the average discounted feature activations when starting in s and following π. They can be learned by temporal difference (TD) updates of the form ψ(s_t) ← ψ(s_t) + α [φ(s_t) + γ ψ(s_{t+1}) − ψ(s_t)]. SF have several interesting properties which make them appealing as a basis for an intrinsic reward signal: 1) They can be learned even in the absence of extrinsic rewards and without learning a transition model, and therefore combine advantages of model-based and model-free RL. 2) They can be learned via computationally efficient TD. 3) They capture the expected feature activations for complete episodes. Therefore they contain information even of spatially and temporally distant states, which might help for effective far-sighted exploration. Given this discussion, we introduce the successor distance (SD) metric that measures the distance between states by the similarity of their SF, d_SD(s_1, s_2) = ||ψ^π(s_1) − ψ^π(s_2)||_2. Fig. 1 a) shows an example of the successor distance metric in the tabular case.
There the SD roughly correlates to the length of the shortest path between the states. Using this metric to evaluate the intrinsic motivation, one choice could be to use the SD to a fixed anchor state as the intrinsic reward, which depends heavily on the anchor position. Even when a sensible choice for the anchor can be found, e.g. the initial state of an episode, the SDs of distant states from the anchor assimilate. For a pair of states with a fixed spatial distance, their SD is higher when they are located in different rooms and the SD increases substantially when crossing rooms. Therefore the metric might capture the connectivity of the underlying state space. This observation motivates us to define the intrinsic reward successor feature control (SFC) as the squared SD of a pair of consecutive states A high SFC reward indicates a big change in the future feature activations when π is followed. We argue this big change is a strong indicator of bottleneck states, since in bottlenecks a minor change in the action selection can lead to a vastly different trajectory being taken. Fig.1b ) shows that those highly rewarding states under SFC and the true bottlenecks agree, which can be very valuable for exploration . The classical way of adding the intrinsic reward to the extrinsic reward has several drawbacks. First, the final policy is not trained to maximize the actual objective but a mixed version. Second, the intrinsic reward signal is usually changing over time. Including this non-stationary signal in the overall reward can make learning of the actual task unstable. Furthermore, the performance is often extremely sensitive to the scaling of the intrinsic reward relative to the extrinsic and hence it has to be tuned very carefully for every environment. To overcome these issues we propose scheduled intrinsic drive (SID), which learns two separate policies, one for each reward signal. During each episode the scheduler samples several times which of the two policies to follow for the next time steps. Each policy is trained off-policy from all of the transitions irrespective of which policy collected the data. As SID does not add the two reward signals no scaling parameter is needed. A policy is learned that exclusively maximizes extrinsic reward and hence neither the final policy nor the learning process is disturbed by the intrinsic reward. At the same time exploration is ensured as there is experience collected by the policy that learns from the intrinsic reward. Furthermore, scheduling can help exploration as each policy is acted on for an extended time interval, allowing long-term exploration instead of local exploration. Besides that the agent is less susceptible to always go to a nearby small reward instead of looking for other larger rewards that maybe further away. A mixture policy might be attracted to the small reward while with SID the exploration policy is followed for several timesteps which can bring the agent to new states with larger rewards that it did not know of before. We investigated several types of high-level schedulers, however, none of them consistently outperforms a random one. We present possible explanations why a random scheduler already performs well and present them in Appendix F along with the different scheduler choices we tested. Our proposed method can be combined with an any approach that allows off-policy learning. 
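To make the two ingredients above concrete, the following is a minimal Python sketch (ours, not the authors' code) of the SFC reward and a random SID scheduler. Here sf_net is a hypothetical callable returning the successor-feature vector ψ(s) of a state, and the scheduler re-samples which drive to follow a fixed number of times per episode.

import numpy as np

def sfc_reward(sf_net, s, s_next):
    """Successor feature control: squared successor distance of two consecutive states."""
    psi, psi_next = np.asarray(sf_net(s)), np.asarray(sf_net(s_next))
    return float(np.sum((psi_next - psi) ** 2))

class RandomScheduler:
    """Re-samples which drive (extrinsic or intrinsic) to follow at each segment boundary."""
    def __init__(self, max_episode_len, num_switches=8, seed=0):
        self.segment_len = max(1, max_episode_len // num_switches)
        self.rng = np.random.default_rng(seed)
        self.current = "extrinsic"

    def task_for_step(self, t):
        if t % self.segment_len == 0:
            self.current = str(self.rng.choice(["extrinsic", "intrinsic"]))
        return self.current

Both policies would then be trained off-policy from all transitions in a shared replay buffer, regardless of which of the two behavior policies collected them.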
This section describes an instantiation of the SID framework when using Ape-X DQN as a basic offpolicy DRL algorithm with SFC as the intrinsic reward, which we used for all experiments. We depict this algorithm instance in Appendix Figure 8 and more details are provided in Appendix C. The algorithm is composed of: • A Q-Net {θ ϕ, θ E, θ I}: Contains a shared embedding θ ϕ and two Q-value output heads θ E (extrinsic) and θ I (intrinsic). • A SF-Net {θ φ, θ ψ}: Contains an embedding θ φ and a successor feature head θ ψ. θ φ is initialized randomly and kept fixed during training. The output of SF-Net is used to calculate the SFC intrinsic reward (Eq.4). The SF-net is trained with the samples of the replay buffer, which contains the experience generated by the behavior policy. • A high-level scheduler: Instantiated in each actor, selects which policy to follow (extrinsic or intrinsic) after a fixed number of environment steps (max episode length/M). The scheduler randomly picks one of the tasks with equal probability. • N parallel actors (N = 8): Each actor instantiates its own copy of the environment, periodically copies the latest model from the learner. We learn from K-step targets (K = 5), so each actor at each environment step stores (s t−K, a t−K, K k=1 γ k−1 r t−K+k, s t) into a shared replay buffer. Each actor will act according to either the extrinsic or the intrinsic policy based on the current task selected by its scheduler. • A learner: Learns the Q-Net (θ E and θ I are learned with the extrinsic and intrinsic reward respectively) and the SF-Net from samples (Eq.2) from the same shared replay buffer, which contains all experiences collected from following different policies. We evaluate our proposed intrinsic reward SFC and the hierarchical framework of intrinsic motivation SID in three sets of simulated environments: VizDoom , DeepMind Lab and DeepMind Control Suite . Throughout all experiments, agents receive as input only raw pixels with no additional domain knowledge or task specific information. We mainly compare the following agent configurations: M: Ape-X DQN with 8 actors, train with only the extrinsic main task reward; ICM: train a single policy with the ICM reward bonus ; RND: train a single policy with the RND reward bonus (b); Ours: with our proposed SID framework, schedule between following the extrinsic main task policy and the intrinsic policy trained with our proposed SFC reward. We carried out an ablation study, where we compare the performance of an agent with intrinsic and extrinsic reward summed up, to the corresponding SID agent for each intrinsic reward type (ICM, RND, SFC). We present the plots and discussions in Section 4.4 Appendix A. For the intrinsic reward normalization and the scaling for the extrinsic and intrinsic rewards we do a parameter sweep for each environment (Appendix C.4) and choose the best setting for each agent. We notice that our scheduling agent is much less sensitive to different scalings than agents with added reward bonus. Since our proposed SID setup requires an off-policy algorithm to learn from experiences generated by following different policies, we implement all the agents under the Ape-X DQN framework. After a parameter sweep we set the number of scheduled tasks per episode to M = 8 for our agent in all experiments, meaning each episode is divided into up to 8 sub-episodes, and for each of which either the extrinsic or the intrinsic policy is sampled as the behavior policy. 
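As a rough illustration of the Q-Net and SF-Net listed above, here is a minimal PyTorch sketch. It is ours, with illustrative layer sizes and a flat feature input standing in for the convolutional pixel encoder; only the structure (shared embedding θ_ϕ with two Q-heads, and a frozen random embedding φ feeding a trained successor-feature head ψ) follows the text.

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Shared embedding (theta_phi) with an extrinsic (theta_E) and an intrinsic (theta_I) Q-head."""
    def __init__(self, obs_dim, num_actions, hidden=256):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.q_ext = nn.Linear(hidden, num_actions)
        self.q_int = nn.Linear(hidden, num_actions)

    def forward(self, obs):
        h = self.embed(obs)
        return self.q_ext(h), self.q_int(h)

class SFNet(nn.Module):
    """Randomly initialized, frozen embedding phi and a learned successor-feature head psi."""
    def __init__(self, obs_dim, feat_dim=512, hidden=256):
        super().__init__()
        self.phi = nn.Linear(obs_dim, feat_dim)
        for p in self.phi.parameters():
            p.requires_grad_(False)          # phi is kept fixed during training
        self.psi = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, obs):
        feat = self.phi(obs)
        return feat, self.psi(feat)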
Appendix C and D contain additional information about experimental setups and model training details. We start by verifying our implementation of the baseline algorithms in "DoomMyWayHome" which was previously used in several state-of-the-art intrinsic motivation papers . The agent needs to navigate based only on first-person view visual inputs through 8 rooms connected by corridors (Fig.2a), each with a distinct texture (Fig.2c). The experimental are shown in Fig.3 (left). Since our basic RL algorithm is doing off-policy learning, it has relatively decent random exploration capabilities. We see that the M agent is able to solve the task sometimes without any intrinsically generated motivations, but that all intrinsic motivation types help to solve the task more reliably and speed up the learning. Our method solve the task the fastest, but also ICM and RND learn to reach the goal reliably and efficiently. We wanted to test the agents on a more difficult VizDoom map where structured exploration would be of vital importance. We thus designed a new map which scales up the navigation task of MyWayHome. Inspired by how flytraps catch insects, we design the layout of the rooms in a geometrically challenging way that escaping from one room to the next with random actions is extremely unlikely. We show the layout of MyWayHome (Fig.2a) and FlytrapEscape (Fig.2b) with the same downscaling ratio. The maze consists of 4 rooms separated by V-shaped walls pointing inwards the rooms. The small exits of each room is located at the junction of the V-shape, which is extremely difficult to maneuver into without a sequence of precise movements. As in the MyWayHome task, in each episode, the agent starts from the red dot shown in Fig.2b with a random orientation. An episode terminates if the final goal is reached and the agent will receive a reward of +1, or if a maximum episode steps of 10,000 (2100 for MyWayHome) is reached. The task is to escape the fourth room. The experimental on FlytrapEscape are shown in Fig.3 (right). Neither M nor RND manages to learn any useful policies. ICM solves the task in sometimes, while we can clearly observe that our method efficiently explores the map and reliably learns how to navigate to the goal. We visualize the learned successor features in Appendix E and its evolution over time is shown in the video https://gofile.io/?c=HpEwTd. In the second experiment, we set out to evaluate if the agents would be able to reliably collect the faraway big reward in the presence of small nearby distractive rewards. For this experiment we use the 3D visual navigation simulator of DeepMind Lab . We constructed a challenging level "AppleDistractions" (Fig.10b) with a maximum episode length of 1350. In this level, the agent starts in the middle of the map (blue square) and can follow either of the two corridors. Each corridor has multiple sections and each section consists of two dead-ends and an entry to next section. Each section has different randomly generated floor and wall textures. One of the corridors (left) gives a small reward of 0.05 for each apple collected, while the other one (right) contains a single big reward of 1 at the end of its last section. The optimal policy would be to go for the single faraway big reward. But since the small apple rewards are much closer to the spawning location of the agent, the challenge here is to still explore other areas sufficiently often so that the optimal solution could be recovered. The are presented in Fig.4 (left). 
Ours received on average the highest rewards and is the only method that learns to navigate to the large reward in every run. The baseline methods get easily distracted by the small short-term rewards and do not reliably learn to navigate away from the distractions. With a separate policy for intrinsic motivation the agent can for some time interval completely "forget" about the extrinsic reward and purely explore, since it does not get distracted by the easily reachable apple rewards and can efficiently learn to explore the whole map. In the meanwhile the extrinsic policy can simultaneously learn from the new experiences and might learn about the final goal discovered by the exploration policy. This highlights a big advantage of scheduling over bonus rewards, that it reduces the probability of converging to bad local optimums. In Section 4.4 we further showed that SID is generally applicable and also helps ICM and RND in this task. To show that our methods can be used in domains other than first-person visual navigation, we evaluate on the classic control task "cartpole: swingup sparse" , using third-person view images as inputs (Fig.11). The pole starts pointing down and the agent receives a single terminal reward of +1 for swinging up the unactuated pole using only horizontal forces on the cart. Additional details are presented in Appendix D.3. The are shown in Fig.4 (right). Compared to the previous tasks, this task is easy enough to be solved without intrinsic motivation, but we can see also that all intrinsic motivation methods significantly reduce the interaction time. Ours still outperforms other agents even in the absence of clear bottlenecks which shows its general applicability, but since the task is relatively less challenging for exploration, the performance gain is not as substantial as the previous experiments. Further, we conducted an ablation study on AppleDistractions. We denote with "M+SFC", "M+RND", "M+ICM" the agents with one policy where the respective intrinsic reward is added to the extrinsic one. With "SID(M,SFC)", "SID(M,RND)", "SID(M,ICM)" the agents are named that have two policies, one for the respective intrinsic and one for the extrinsic reward and use SID to schedule between them. We note that the "SID(M,ICM)" agent corresponds to the "Ours" agent from the previous experiments. The are presented in Figure 5. Our SID(M, SFC) agent received on average the highest rewards. Furthermore, we see that scheduling helped both ICM and SFC to find the goal and not settle for the small rewards, and SID also helps improve the performance of RND. The respective reward bonus counterparts of the three SID agents were more attracted to the small nearby rewards. This behavior is expected: By scheduling, the intrinsic policy of the SID agent is assigned with its own interaction time with the environment, during which it could completely "forget" about the extrinsic rewards. The agent then has a much higher probability of discovering the faraway big reward, thus escaping the distractions of the nearby small rewards. Once the intrinsic policy collects these experiences of the big reward, the extrinsic policy can immediately learn from those since both policies share the same replay buffer. Ablations for the other environments are reported in Appendix A. In this paper, we investigate an alternative way of utilizing intrinsic motivation for exploration in DRL. We propose a hierarchical agent SID that schedules between following extrinsic and intrinsic drives. 
Moreover, we propose a new type of intrinsic reward SFC that is general and evaluates the intrinsic motivation based on longer time horizons. We conduct experiments in three sets of environments and show that both our contributions SID and SFC help greatly in improving exploration efficiency. We consider many possible research directions that could stem from this work, including designing more efficient scheduling strategies, incorporating several intrinsic drives (that are possibly orthogonal and complementary) instead of only one into SID, testing our framework in other control domains such as manipulation, combining the successor representation with learned feature representations and extending our evaluation onto real robotics systems. We have conducted ablation studies for all the three sets of environments to investigate the influence of scheduling on our proposed method, whether other reward types can benefit from scheduling too, and whether environment specific differences exist. We compare the performance of the following agent configurations: • Three reward bonus agents M+ICM, M+RND, M+SFC: The agent receives the intrinsic reward of ICM , RND (b) or our proposed SFC respectively as added bonus to the extrinsic main task reward and trains a mixture policy on this combined reward signal. We note that the M+ICM and M+RND agent in this section corresponds to the ICM and RND agent in all other sections respectively. The agent schedules between following the extrinsic main task policy and the intrinsic policy trained with the ICM, RND or our proposed SFC reward respectively. We note that the SID (M, SFC) agent in this section corresponds to the Ours agent in all other sections. The on AppleDistractions were shown in in Section 4.4 the main paper. In Fig.6 (left), we present the ablation study for FlytrapEscape. The agents with the ICM component perform poorly. Only 1 run of M+ICM learned to navigate to the goal, while the scheduling agent SID(M,ICM) did not solve the task even once. But for the two SFC agents, the scheduling greatly improves the performance. Although the reward bonus agent M+SFC was not successful in every run, the SID(M,SFC) agent solved the FlytrapEscape in 5 out of 5 runs. We hypothesize the reason for the superior performance of SID(M,SFC) compared to M+SFC could be the following: Before seeing the final goal for the first time, the M+SFC agent is essentially learning purely on the SFC reward, which is equivalent to the intrinsic policy of the scheduling SID(M,SFC) agent. Since SFC might preferably go to bottleneck states as the difference between the SF of the two neighboring states are expected to be relatively larger for those states. Since the extrinsic policy is doing random exploration before receiving any reward signal, it could be a good candidate to explore the next new room from the current bottleneck state onwards. Then the SFs of the new room will be learned when it is being explored, which would then guide the agent to the next bottleneck regions. Thus the SID(M,SFC) agent could efficiently explore from bottleneck to bottleneck, while the M+SFC agent could not be able to benefit from the two different behaviors under the extrinsic and intrinsic rewards and could oscillate around bottleneck states. On the other hand, scheduling did not help ICM or RND. 
A reason could be that ICM or RND is not especially attracted by bottleneck states, so it does not help exploration if the agent spends half of the time acting randomly while the extrinsic policy has no reward yet to learn from. Also, since the FlytrapEscape environment is extremely exploration-challenging, the temporally extended view of our proposed SFC might be of vital importance to guide efficient exploration. In Fig.6 (right), we present the ablation study for Cartpole. We can observe that SID helps to improve the performance of both ICM and RND. As for SFC, although the reward bonus agent learns a bit faster than the SID agent, we note that all three SID agents converge to more stable policies, while the reward bonus agents tend to oscillate around the optimal return. In the experiments of the main paper with SID, the scheduler chooses 8 times per episode which policy to follow until a new policy is chosen. We conducted a further experiment to examine how different numbers of switches per episode affect the performance of the agent. We carried out the experiment on "DoomMyWayHome" with an agent using SID with SFC for the intrinsic reward. The results are shown in Fig. 7. C APPENDIX: IMPLEMENTATION DETAILS This section describes implementation details and design choices. The backbone of our algorithm implementation is presented in Section 3.4 and visualized in Fig. 8. Since our algorithm requires an off-policy learning strategy, and in consideration of faster learning and less computation overhead, we use the state-of-the-art off-policy algorithm Ape-X DQN with the K-step target (K = 5) for bootstrapping without off-policy correction, i.e. the target Σ_{k=1}^{K} γ^{k−1} r_{t+k} + γ^K max_{a'} Q(s_{t+K}, a'; θ^−), where θ^− denotes the target network parameters. We chose the number of actors to be the highest the hardware supported, which was 8. To adapt the settings from the 360 actors in the Ape-X DQN to our setting of N = 8 actors, we set a fixed exploration rate ε_i for each actor i ∈ {1, . . ., 8} as ε_i = ε^{1 + α(i−1)/(N−1)}, where α = 7 and ε = 0.4 are set as in the original work. For computational efficiency, we implement our own version of the prioritized experience replay. We split the replay buffer into two, with sizes of 40,000 and 10,000. Every transition is pushed to the first one, while in the second one only transitions are pushed on which a very large TD-error is computed. We store a running estimate of the mean and the standard deviation of the TD-errors, and if for a transition the error is larger than the mean plus two times the standard deviation, the transition is pushed. In the learner, a batch of size 128 consists of 96 transitions drawn from the normal replay buffer and 32 drawn from the one that stores transitions with high TD-error, which as a result have a relatively higher chance of being picked. We note that previous works for learning the deep SF have included an auxiliary task of reconstruction on the features φ (Kulkarni et al., 2016a), while in this work we investigate learning ψ without this extra reconstruction stream. Instead of adapting the features φ while learning the successor features ψ, we fix the randomly initialized φ. This design follows the intuition that, since the SF (ψ) estimate the expectation of the features (φ) under the transition dynamics and the policy being followed, more stable learning of the SF could be achieved if the features are kept fixed. The SF are learned from the same replay buffer as for training the Q-Net.
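A minimal sketch of the simplified two-buffer replay described above. The buffer sizes, the 96/32 batch split and the mean-plus-two-standard-deviations rule follow the text; the exponential running-statistics update is our assumption.

import random
from collections import deque

class TwoBufferReplay:
    def __init__(self, normal_size=40_000, priority_size=10_000, momentum=0.99):
        self.normal = deque(maxlen=normal_size)
        self.priority = deque(maxlen=priority_size)
        self.mean, self.second_moment, self.momentum = 0.0, 1.0, momentum

    def push(self, transition, td_error):
        self.normal.append(transition)
        # running estimates of the TD-error mean and second moment
        self.mean = self.momentum * self.mean + (1 - self.momentum) * td_error
        self.second_moment = self.momentum * self.second_moment + (1 - self.momentum) * td_error ** 2
        std = max(self.second_moment - self.mean ** 2, 1e-8) ** 0.5
        if td_error > self.mean + 2 * std:
            self.priority.append(transition)

    def sample(self, normal_n=96, priority_n=32):
        batch = random.sample(list(self.normal), min(normal_n, len(self.normal)))
        if self.priority:
            batch += random.choices(list(self.priority), k=priority_n)
        return batch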
Since our base algorithm is K-step Ape-X, and we follow the memory efficient implementation of the replay buffer as suggested in Ape-X, we only have access to K-step experience tuples (K = 5) for learning the SF. Therefore we calculate the intrinsic reward by applying the canonical extension of the SFC reward formulation (Eq. 4) to K-step transitions, i.e. r^SFC_t = ||ψ^π(s_{t+K}) − ψ^π(s_t)||²_2. The behaviour policy π associated with the SF is not given explicitly, but since the SF are learned from the replay buffer via TD learning, it is a mixture of current and past behaviour policies from all actors. Most network parameters are shared for estimating the expected discounted return of the intrinsic and extrinsic rewards. The scale of the rewards has a big influence on the scale of the gradients for the network parameters. Hence, it is important that the rewards are roughly on the same scale, otherwise effectively different learning rates are applied. The loss of the network comes from the regression on the Q-values, which approximate the expected return. So our normalization method aims to bring the discounted return of both tasks into the same range. To do so, we first normalize the intrinsic rewards by dividing them by a running estimate of their standard deviation. We also keep a running estimate of the mean of this normalized reward and denote it r̄_I. Since an intrinsic reward is received at every time step, we estimate the discounted intrinsic return via the geometric series as r̄_I / (1 − γ_I), where γ_I is the discount rate for the intrinsic reward. We then scale the extrinsic task reward, which is always in {0, 1}, by η · r̄_I / (1 − γ_I). Furthermore, η is a hyperparameter which takes into account that for Q-values of states more distant to the goal, the reward is discounted with the discount rate for the extrinsic reward depending on how far away that state is. In our experiments we set η = 3. We did the same search for hyperparameters and normalization technique for all algorithms that include an intrinsic reward and found that the procedure above works best for all of them. The algorithms were evaluated on the FlytrapEscape. For η we tried the values in {0.3, 1, 3, 10}. We also tried not normalizing the rewards and just scaling the intrinsic reward. To scale the intrinsic reward we tried the values {0.001, 0.01, 0.1, 1}. However, we found that as the scale of the intrinsic rewards is not the same over the whole training process, this approach does not work well. We also tried to normalize the intrinsic rewards by dividing them by a running estimate of their standard deviation and then scaling this quantity with a value in {0.01, 0.1, 1}. We use the same model architecture as depicted in Fig. 9 across all 3 sets of experiments. ReLU activations are added after every layer except for the last layers of each dashed block in the figure. For the experiments with the ICM, we added BatchNorm before activation for the embedding of the ICM module, following the original code released by the authors. Code is implemented in PyTorch. For all experiments we used a stack of 4 consecutive, preprocessed observations as states. For the first-person view experiments in VizDoom and DeepMind Lab, we use an action repetition of 4, while for the classic control experiment we did not apply action repetition. In the text, we only refer to the actual environment steps (i.e. before dividing by 4). The VizDoom environment produces 320 × 240 RGB images as observations. In a preprocessing step, we downscaled the images to 84 × 84 pixels and converted them to grayscale.
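The normalization scheme above can be summarized in a few lines. The sketch below is our reading of it: η = 3 and the geometric-series matching of the two discounted returns follow the text, while the exact form of the running estimates and the value γ_I = 0.99 are placeholders.

class RewardNormalizer:
    def __init__(self, gamma_int=0.99, eta=3.0, momentum=0.99):
        self.gamma_int, self.eta, self.momentum = gamma_int, eta, momentum
        self.int_second_moment = 1.0   # running estimate of E[r_int^2], used as a std proxy
        self.int_mean_norm = 0.0       # running mean of the normalized intrinsic reward (r_bar_I)

    def normalize_intrinsic(self, r_int):
        self.int_second_moment = self.momentum * self.int_second_moment + (1 - self.momentum) * r_int ** 2
        r_norm = r_int / (self.int_second_moment ** 0.5 + 1e-8)
        self.int_mean_norm = self.momentum * self.int_mean_norm + (1 - self.momentum) * r_norm
        return r_norm

    def scale_extrinsic(self, r_ext):
        # match the scale of the intrinsic discounted return, roughly r_bar_I / (1 - gamma_I)
        return self.eta * self.int_mean_norm / (1.0 - self.gamma_int) * r_ext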
For FlytrapEscape, we adopted the action space settings from the MyWayHome task. The action space was given by the following 5 actions: TURN LEFT, TURN RIGHT, MOVE FORWARD, MOVE LEFT, MOVE RIGHT We setup the DmLab environment to produce 84 × 84 RGB images as observations. In Fig.10 we show examplary observations of AppleDistractions. We preprocessed the images by converting the observations to grayscale. For a given enviroment seed, textures for each segment of the maze are generated at random. We used the predefined DmLab actions from. The action space was given by the following 8 actions (no shooting setting): Forward, Backward, Strafe Left, Strafe Right, Look Left, Look Right, Forward+Look Left, Forward+Look Right. We conducted the experiments for the classic control task on the'Cart-pole' domain with the'swingup sparse' task provided by the DeepMind Control Suite. Since our agents needs a discrete action space and the control suite only provides continuous action spaces, we discretized the single action dimension. The set of actions was {-0.5, -0.25, 0, 0.25, 0.5}. We configured the environment to produce 84×84 RGB pixel-only observations from the 1st camera, which is the only predefined camera that shows the full cart and pole at all times. We further convert the images to grey-scale and stack four consecutive frames as input to our network. The episode length was 200 environment steps. In the four-room domain, the agent can transition to directly connected states using the four actions'up','down','left' and'right'. For Fig.1 a),b) the successor features were calculated analytically via the formula Ψ = (I − γP) − 1, where P denotes the one-step transition matrix. The SF discount factor was set to γ = 0.95. For Fig.1 c),d) the agents performed 10000 episodes with 30 steps each. These short episodes ensure that exploration remains challenging, even in a relatively small environment. In d), the learning rate for the SF as well as the Q-table was set to 0.05. To prevent optimistic initialization effects the Q-table was initialized to 0. To generate our we used two machines that run Ubuntu 16.04. Each machine has 4 GeForce Titan X (Pascal) GPUs. On one machine we run 4 experiments in parallel, each experiment on a separate GPU. E APPENDIX: SUCCESSOR DISTANCE VISUALIZATION Figure 12: Projection of the SFs. For the purpose of visualization we discretized the map into 85 × 330 grids and position the trained agent SID(M,SFC) at each grid, then computed the successor features ψ for that location for each of the 4 orientations (0°, 90°, 180°, 270°), which ed in a 4 × 512 matrix. We then calculated the l2-difference of this matrix with a 4 × 512 vector containing the successor features of the starting position with the 4 different orientations. Shown in log-scale. As an additional evaluation, we visualize the SF of an Ours agent (i.e. trained with SID which schedules between the extrinsic policy and the SFC policy) at the end of the training (Fig.12). That means the SF are trained with the experiences from the behaviour policy of our agent. We can see that the SD from each coordinate to the starting position tends to grow as the geometric distance increases, especially for those that locate on the pathways leading to later rooms. This shows that the learned SD and the geometric distance are in good agreement and that the SF are learned as expected. 
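For the tabular four-room analysis of Fig. 1 a), b), the analytic successor representation can be computed directly. A small sketch, assuming a precomputed one-step transition matrix P of the random policy over the grid states:

import numpy as np

def analytic_sr(P, gamma=0.95):
    """Psi = (I - gamma * P)^(-1), the successor representation under the policy inducing P."""
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

def successor_distance(Psi, s1, s2):
    """Distance between two states as the norm of the difference of their SR rows."""
    return float(np.linalg.norm(Psi[s1] - Psi[s2]))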
Furthermore, we observe big intensity changes around the bottlenecks (the room entries) in the heatmap, which also supports the hypothesis that SFC leads the agent to bottleneck states. We believe this is the first time that SF are shown to behave in a first-person view environment as one would expect from its definition. The evolution of the SF over time is shown in the video https://gofile.io/?c=HpEwTd. When learning optimal value functions or optimal policies via TD or policy gradient with deep function approximators, optimizing with algorithms such as gradient descent means that the policy would only evolve incrementally: It is necessary that the TD-target values do not change drastically over a short period of time in order for the gradient updates to be meaningful. The common practice of utilizing a target network in off-policy DRL stabilizes the update but in the meanwhile making the policy adapt even more incrementally over each step. But intrinsically motivated exploration, or exploration in general, might benefit from an opposite treatment of the policy update. This is because the intrinsic reward is non-stationary by nature, as well as the fact that the exploration policy should reflect the optimal strategy corresponding to the current stage of learning, and thus is also non-stationary. With the commonly adopted way of using intrinsic reward as a bonus to the extrinsic reward and train a mixture policy on top, exploration would be a balancing act between the incrementally updated target values for stable learning and the dynamically adapted intrinsic signals for efficient exploration. Moreover, neither the extrinsic nor the intrinsic signal is followed for an extended amout of time. Therefore, we propose to address this issue with a hierarchical approach that by design has slowly changing target values while still allowing drastic behavior changes. The idea is to learn not a single, but multiple policies, with each one optimizing on a different reward function. To be more specific, we assume to have N tasks T ∈ T (e.g. N = 2 and T = {T E, T I} where T E denotes the extrinsic task and T I the intrinsic task) defined by N reward functions (e.g. R E and R I) that share the state and action space. The optimal policy for each of these N different MDPs can be learned with arbitrary off-policy DRL algorithms. During each episode, a high-level scheduler periodically selects a policy for the agent to follow to gather experiences, and each policy is trained with all experiences collected following those N different policies. The overall learning objective is to maximize the extrinsic reward E ω(T|St) E π T (At|St) [q T E (S t, A t |A t ∼ π T (·|S t))] (ω: the macro-policy of the scheduler). By allowing the agent to follow one motivation at a time, it is possible to have a pool of N different behavior policies without creating unstable targets for off-policy learning. By scheduling M times even during an episode, we implicitly increase the behavior policy space by exponential to N M for a single episode. Moreover, disentangling the extrinsic and intrinsic policy strictly separates stationary and non-stationary behaviors, and the different sub-objectives would each be allocated with its own interaction time, such that extrinsic reward maximization and exploration do not distract each other. We investigated several types of high-level schedulers, however, none of them consistently outperforms a random one. 
We suspect the reason why a random scheduler already performs very well under the SID framework is that a highly stochastic schedule can be beneficial to make full use of the big behavior policy space. We investigated the following types of high-level schedulers:
• Random scheduler: Sample a task from a uniform distribution at each scheduling step.
• Switching scheduler: Sequentially switch between the extrinsic and the intrinsic task.
• Macro-Q Scheduler: Learn a scheduler that learns with macro actions and from subsampled experience tuples. In each actor, we keep an additional local buffer that stores N + 1 subsampled experiences: {s_{t−Nm}, . . ., s_{t−2m}, s_{t−m}, s_t}. Then at each environment step, besides the K-step experience tuple mentioned above, we also store an additional macro-transition {s_{t−Nm}, s_t} along with its sum of discounted rewards in the shared replay buffer. This macro-transition is paired with the current task as its macro-action. The Macro-Q Scheduler is then learned with an additional output head attached to θ_ϕ (we also tried θ_φ).
• Threshold-Q Scheduler: Select the task according to the Q-value output of the extrinsic task head. For this scheduler no additional learning is needed. It just selects a task based on the current Q-value of the extrinsic head θ_E. We tried the following selection strategies:
- Running mean: select intrinsic when the current Q-value of the extrinsic head is below its running mean, extrinsic otherwise.
- Heuristic median: observing that the running mean of the Q-values might not be a good statistic for selecting tasks due to the very unevenly distributed Q-values across the map, we choose a fixed value that is around the median of the Q-values (0.007), and choose intrinsic when below, extrinsic otherwise.
As we report in the paper, none of the above scheduler choices consistently performs better across all environments than a random scheduler. We leave this part to future work. G APPENDIX: SINGLE RUNS
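For illustration, the switching and Threshold-Q schedulers described above could be sketched as follows; the 0.007 threshold comes from the text, while the interfaces are simplified assumptions rather than the authors' implementation.

class SwitchingScheduler:
    """Alternates between the extrinsic and the intrinsic task at every scheduling step."""
    def __init__(self):
        self.current = "intrinsic"

    def next_task(self, **kwargs):
        self.current = "intrinsic" if self.current == "extrinsic" else "extrinsic"
        return self.current

class ThresholdQScheduler:
    """Follows the intrinsic drive whenever the extrinsic Q-value looks unpromising."""
    def __init__(self, threshold=0.007):
        self.threshold = threshold

    def next_task(self, extrinsic_q_value, **kwargs):
        return "intrinsic" if extrinsic_q_value < self.threshold else "extrinsic"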
Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches to finding efficient exploration strategies add auxiliary objectives to promote exploration by the pre-update policy, however, this makes the adaptation using a few gradient steps difficult as the pre-update (exploration) and post-update (exploitation) policies are quite different. Instead, we propose to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. We show that using self-supervised or supervised learning objectives for adaptation stabilizes the training process and also demonstrate the superior performance of our model compared to prior works in this domain. Reinforcement learning (RL) approaches have seen many successes in recent years, from mastering the complex game of Go BID10 to even discovering molecules BID8. However, a common limitation of these methods is their propensity to overfitting on a single task and inability to adapt to even slightly perturbed configuration BID12. On the other hand, humans have this astonishing ability to learn new tasks in a matter of minutes by using their prior knowledge and understanding of the underlying task mechanics. Drawing inspiration from human behaviors, researchers have proposed to incorporate multiple inductive biases and heuristics to help the models learn quickly and generalize to unseen scenarios. However, despite a lot of effort it has been difficult to approach human levels of data efficiency and generalization. Meta-RL tries to address these shortcomings by learning these inductive biases and heuristics from the data itself. These inductive biases or heuristics can be induced in the model in various ways like optimization, policy initialization, loss function, exploration strategies, etc. Recently, a class of policy initialization based meta-learning approaches have gained attention like Model Agnostic MetaLearning (MAML) BID1. MAML finds a good initialization for a policy that can be adapted to a new task by fine-tuning with policy gradient updates from a few samples of that task. Given the objective of meta RL algorithms to adapt to a new task from a few examples, efficient exploration strategies are crucial for quickly finding the optimal policy in a new environment. Some recent works BID3 have tried to address this problem by using latent variables to model the distribution of exploration behaviors. Another set of approaches BID11 BID9 focus on improving the credit assignment of the meta learning objective to the pre-update trajectory distribution. However, all these prior works use one or few policy gradient updates to transition from preto post-update policy. This limits the applicability of these methods to cases where the post-update (exploitation) policy is similar to the pre-update (exploration) policy and can be obtained with only a few updates. Also, for cases where pre-and post-update policies are expected to exhibit different behaviors, large gradient updates may in training instabilities and lack of convergence. To address this problem, we propose to explicitly model a separate exploration policy for the distribution of tasks. 
The exploration policy is trained to find trajectories that can lead to fast adaptation of the exploitation policy on the given task. This formulation provides much more flexibility in training the exploration policy. In the process, we also establish that, in order to adapt as quickly as possible to the new task, it is often more useful to use self-supervised or supervised learning approaches, where possible, to get more effective updates. Unlike RL which tries to find an optimal policy for a single task, meta-RL aims to learn a policy that can generalize to a distribution of tasks. Each task T sampled from the distribution ρ(T) corresponds to a different Markov Decision Process (MDP). These MDPs have similar state and action space but might differ in the reward function or the environment dynamics. The goal of meta RL is to quickly adapt the policy to any task T ∼ ρ(T) with the help of few examples from that task. BID1 introduced MAML -a gradient-based meta-RL algorithm that tries to find a good initialization for a policy which can be adapted to any task T ∼ ρ(T) by fine-tuning with one or more gradient updates using the sampled trajectories of that task. MAML maximizes the following objective function: where U is the update function that performs one policy gradient ascent step to maximize the expected reward R(τ) obtained on the trajectories τ sampled from task T. BID9 showed that the gradient of the objective function J(θ) can be written as: DISPLAYFORM0 where, ∇ θ J post (τ, τ) optimizes θ to increase the likelihood of the trajectories τ that lead to higher returns given some trajectories τ. In other words, this term does not optimize θ to yield trajectories τ that lead to good adaptation steps. That is infact, done by the second term ∇ θ J pre (τ, τ). It optimizes for the pre-update trajectory distribution, P T (τ |θ), i.e, increases the likelihood of trajectories τ that lead to good adaptation steps. During optimization, MAML only considers J post (τ, τ) and ignores J pre (τ, τ). Thus MAML finds a policy that adapts quickly to a task given relevant experiences, however, the policy is not optimized to gather useful experiences from the environment that can lead to fast adaptation. ProMP BID9 analyzes this issue with MAML and incorporates ∇ θ J pre (τ, τ) term in the update as well. They propose to use DICE BID2 to allow causal credit assignment on the pre-update trajectory distribution, however, the gradients computed by DICE suffer from high variance estimates. To remedy this, they proposed a low variance (and slightly biased) approximation of the DICE based loss that leads to stable updates. The pre-update and post-update policies are often expected to exhibit very different behaviors, i.e, exploration and exploitation behaviors respectively. In such cases, transitioning a single policy from pure exploration phase to pure exploitation phase via policy gradient updates will require multiple steps. Unfortunately, this significantly increases the computational and memory complexities of the algorithm. Furthermore, it may not even be possible to achieve this transition via few gradient updates. This raises an important question: DO WE REALLY NEED TO USE THE PRE- Using separate policies for pre-update and post-update sampling: The straightforward solution to the above problem is to use a separate exploration policy µ φ responsible for collecting trajectories for the inner loop updates to get θ. 
Following that, the post-update policy π_θ can be used to collect trajectories for performing the outer loop updates. Unfortunately, this is not as simple as it sounds. To understand this, let's look at the inner loop update: θ' = U(θ, τ) = θ + α ∇_θ E_{τ∼P_T(τ|θ)}[R(τ)]. When using the exploration policy for sampling, we need to perform importance sampling. The update thus becomes θ' = θ + α ∇_θ E_{τ∼Q_T(τ|φ)}[(P_T(τ|θ) / Q_T(τ|φ)) R(τ)], where P_T(τ|θ) and Q_T(τ|φ) represent the trajectory distributions sampled by π_θ and µ_φ respectively. Note that the above update is an off-policy update which results in high variance estimates when the two trajectory distributions are quite different from each other. This makes it infeasible to use the importance sampling update in the current form. In fact, this is a more general problem that arises even in the on-policy regime. The policy gradient updates in the inner loop result in both the ∇_θ J_pre and ∇_θ J_post terms being high variance. This stems from the misalignment of the outer gradients (∇_θ J_outer) and the inner gradient and hessian terms. Using a self-supervised/supervised objective for the inner loop update step: The instability in the inner loop updates arises due to the high variance nature of the policy gradient update. Note that the objective of the inner loop update is to provide some task specific information to the agent with the help of which it can adapt its behavior in the new environment. We believe that this could be achieved using some form of self-supervised or supervised learning objective in place of policy gradient in the inner loop to ensure that the updates are more stable. We propose to use a network for predicting some task (or MDP) specific property like reward function, expected return or value. During the inner loop update, the network updates its parameters by minimizing its prediction error on the given task. Unlike prior meta-RL works where the task adaptation in the inner loop is done by policy gradient updates, here, we update some parameters shared with the exploitation policy using a supervised loss objective function, resulting in stability during the adaptation phase. However, note that the variance and usefulness of the update depends heavily on the choice of the self-supervision/supervision objective. We delve into this in more detail in the appendix. Our proposed model comprises three modules: the exploration policy µ_φ(s), the exploitation policy π_{θ,z}(s), and the self-supervision network M_{β,z}(s, a). Note that M_{β,z} and π_{θ,z} share a set of parameters z while containing their own sets of parameters β and θ respectively. The agent first collects a set of trajectories τ using its exploration policy µ_φ for each task T ∼ ρ(T). It then updates the shared parameter z by minimizing the regression loss L(z) = E_{τ∼Q_T(τ|φ)}[Σ_t (M_{β,z}(s_t, a_t) − M(s_t, a_t))²], where M(s, a) is the target, which can be any of the task specific quantities mentioned above. We further modify the above equation by multiplying in the DICE operator to simplify the gradient computation w.r.t. φ. This eliminates the need to apply the policy gradient trick to expand the above expression for gradient computation. The update then becomes z' = z − α ∇_z E_{τ∼Q_T(τ|φ)}[Σ_t (Π_{t'≤t} µ_φ(a_{t'}|s_{t'}) / ⊥(µ_φ(a_{t'}|s_{t'}))) (M_{β,z}(s_t, a_t) − M(s_t, a_t))²], where ⊥ is the stop gradient operator. After obtaining the updated parameters z', the agent samples trajectories τ' using its updated exploitation policy π_{θ,z'}. Note that our model enables the agent to learn a generic exploitation policy π_{θ,z} for the task distribution, which can then be adapted to any specific task by updating z to z' as shown above.
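A possible PyTorch sketch of this inner-loop adaptation is given below. The helper names (m_net, mu_log_prob, the per-step targets) are our assumptions about how the surrounding objects are exposed, and the DiCE weighting is written in its standard "magic box" form, i.e. our reading of the update above rather than the authors' exact code. z is assumed to be a tensor with requires_grad=True.

import torch

def dice_weight(log_probs):
    # DiCE 'magic box': equals 1 in the forward pass but keeps gradients w.r.t. mu_phi
    csum = torch.cumsum(log_probs, dim=0)
    return torch.exp(csum - csum.detach())

def adapt_z(z, m_net, mu_log_prob, states, actions, targets, lr=0.1):
    log_probs = mu_log_prob(states, actions)        # shape [T], log mu_phi(a_t | s_t)
    preds = m_net(z, states, actions)               # shape [T], M_{beta,z}(s_t, a_t)
    per_step_loss = (preds - targets) ** 2
    loss = (dice_weight(log_probs) * per_step_loss).sum()
    (grad_z,) = torch.autograd.grad(loss, z, create_graph=True)
    return z - lr * grad_z                          # adapted task-specific parameters z'

Keeping create_graph=True lets the outer-loop policy gradient differentiate through this adaptation step, which is how the exploration policy receives its learning signal in this formulation.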
Effectively, z encodes the necessary information regarding the task that helps an agent in adapting its behavior to maximize its expected return. The collected trajectories are then used to perform a policy gradient update for all the parameters z, θ, φ and β using the following objective: DISPLAYFORM2 The gradients of J(z, θ) w.r.t. φ are shown in Eq. 6 (see appendix). Although the gradients are unbiased, they still have very high variance. To solve this problem, we draw inspiration from BID7 and replace the return R µ t (see Eq. 7 in appendix) with an advantage estimate A µ t (see 8 in appendix). Due to space constraints we describe these formulations in more detail in appendix. We have evaluated our proposed model on the environments used by BID9. Specifically, we have used HalfCheetahFwdBack, HalfCheetahVel and Walker2DFwdBack environments for the dense reward tasks and a 2D point environment proposed in BID9 for sparse rewards. The details of the network architecture and the hyperparameters used for learning have been mentioned in the appendix. We would like to state that we have not performed much hyperparameter tuning due to computational constraints and we expect the of our method to show further improvements with further tuning. Also, we restrict ourselves to a single adaptation step in all environments for the baselines as well as our method, but it can be easily extended to multiple gradient steps as well by conditioning the exploration policy on z. The of the baselines for the benchmark environments have been borrowed directly from the the official ProMP website 1. For the point environments, we have used the publicly available official implementation 2. We also compare our method with 3 baseline approaches: MAML, EMAML and ProMP on the benchmark continuous control tasks. The performance plots for all four algorithms are shown in FIG0. In all the environments, our proposed method outperforms others in terms of asymptotic performance. The training is also more stable for our method and leads to lower variance plots. Our algorithm particularly shines in 2DPointEnvCorner FIG1 where the reward is sparse. In this environment, the agent needs to perform efficient exploration and use the sparse reward trajectories to perform stable updates both of which are salient aspects of our algorithm. Although ProMP manages to reach similar peak performance to our method in 2DPointEnvCorner, the training itself is pretty unstable indicating the inherent fragility of their updates. Further, we show that our method leads to good exploration behavior in a sparse reward point environment where the agent is allowed to sample only two trajectories in order to perform the updates illustrating the strength of our procedure. We also show that the separation of exploration and exploitation policies in this scenario allows us to train the exploration policy using an independent objective providing better performance in certain situations. Ablations: We perform several ablation experiments to analyze the impact of different components of our algorithm on 2D point navigation task. FIG1 shows the performance plots for 5 different variants: VPG-Inner loop: The supervised loss in the inner loop is replaced with the vanilla policy gradient loss as in MAML while using the exploration policy to sample the pre-update trajectories. As expected, this model performs poorly due to the high variance off-policy updates in the inner loop. 
Reward Self-Supervision: A reward based self-supervised objective is used instead of return based self-supervision. This variant performs reasonably well but struggles to reach peak performance since the task is sparse reward. Vanilla DICE: We directly use the dice gradients to perform updates on φ instead of using the low variance gradient estimator. The high variance dice gradients lead to unstable training as can be seen from the plots. E-MAML Based: Used an E-MAML BID11 type objective to compute the gradients w.r.t φ instead of using DICE. Although this variant manages to reach peak performance, it is unstable due to the lack of causal credit assignment. Ours: Used the low variance estimate of the dice gradients to compute updates for φ along with return based self-supervision for inner loop updates. Our model reaches peak performance and exhibits stable training due to low variance updates. Unlike conventional meta-RL approaches, we proposed to explicitly model a separate exploration policy for the task distribution. Having two different policies gives more flexibility in training the exploration policy and also makes adaptation to any specific task easier. Hence, as future work, we would like to explore the use of separate exploration and exploitation policies in other meta-learning approaches as well. We showed that, through various experiments on both sparse and dense reward tasks, our model outperforms previous works while also being very stable during training. This validates that using self-supervised techniques increases the stability of these updates thus allowing us to use a separate exploration policy to collect the initial trajectories. Further, we also show that the variance reduction techniques used in the objective of exploration policy also have a huge impact on the performance. However, we would like to note that the idea of using a separate exploration and exploitation policy is much more general and doesn't need to be restricted to MAML. that to compute M β,z (s t, a t) = w T β m β (s t, a t). Using the successor representations can effectively be seen as using a more accurate/powerful baseline than directly predicting the N-step returns using the (s t, a t)pair. We perform some additional experiments on another toy environment to illustrate the exploration behavior shown by our model and demonstrate the benefits of using different exploration and exploitation policies. FIG2 an environment where the agent is initialized at the center of the semi-circle. Each task in this environment corresponds to reaching a goal location (red dot) randomly sampled from the semi circle (green dots). This is also a sparse reward task where the agent receives a reward only if it is sufficiently close to the goal location. However, unlike the previous environments, we only allow the agent to sample 2 pre-update trajectories per task in order to identify the goal location. Thus the agent has to explore efficiently at each exploration step in order to perform reasonably at the task. FIG2 the trajectories taken by our exploration agent (orange and blue) and the exploitation/trained agent (green). Clearly, our agent has learnt to explore the environment. However, we know that a policy going around the periphery of the semi-circle would be a more useful exploration policy. In this environment we know that this exploration behavior can be reached by simply maximizing the environment rewards collected by the exploration policy. 
FIG3 shows this experiment where the exploration policy is trained using environment reward maximization while everything else is kept unchanged. We call this variant Ours-EnvReward. We also show the trajectories traversed by promp in FIG4 It is clear that it struggles to learn different exploration and exploitation behaviors. FIG5 shows the performance of our two variants along with the baselines. This experiment shows that decoupling the exploration and exploitation policies also allows us, the designers more flexibility at training them, i.e, it allows us to add any domain knowledge we might have regarding the exploration or the exploitation policies to further improve the performance. We also experiment with Walker2DRandParams and HopperRandParams. The different tasks in these environments arise from variations in the dynamics of the agent. The are shown in FIG7 We observe that in both these environments we match the performance of the baselines but don't really perform much better. This could be because we still use the n-step return as the self-supervision objective in our experiments. We expect the to get better if we test with next-state prediction etc as self-superivision objectives. We leave that for future work. For all the experiments, we treat the shared parameter z as a learnable latent embedding with fixed initial values of 0 as proposed in BID13,i.e, we don't perform any outer-loop updates on z.. The exploitation policy π θ,z (s) and the self-supervision network M β,z (s, a) concatenates z with their respective inputs. All the three networks (π, µ, M) have the same architecture (except inputs and output sizes) as that of the policy network in BID9 for all experiments. We also stick to the same values of hyper-parameters such as inner loop learning rate, gamma, tau and number of outer loop updates. We keep a constant embedding size of 32 and a constant N=15 (for computing the N-step returns) across all experiments and runs. We use the Adam BID5 optimizer with a learning rate of 7e − 4 for all parameters except φ, which uses a learning rate of 7e − 5. Also, we restrict ourselves to a single adaptation step in all environments, but it can be easily extended to multiple gradient steps as well by conditioning the exploration policy on the latent parameters z. We have provided a version of our code in the supplementary material. We will soon open source a cleaned version of this online. | We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance. | 686 | scitldr |
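As a side note on the inner-loop objective used throughout these experiments, the text above replaces raw N-step returns with an advantage estimate to reduce the variance of the policy-gradient update. The following is a minimal numpy sketch of that substitution; it is our illustration rather than the paper's code, it assumes fixed-horizon trajectories, and the per-timestep mean baseline is a crude stand-in for the learned baseline described in the appendix.

```python
import numpy as np

def n_step_return(rewards, t, n=15, gamma=0.99):
    """Discounted N-step return: sum_{k=t}^{t+n-1} gamma^(k-t) * r_k."""
    horizon = min(t + n, len(rewards))
    return sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))

def advantages(reward_batch, gamma=0.99, n=15):
    """A_t = R_t - b_t, with b_t the mean N-step return at time t across the
    sampled trajectories (a stand-in for a learned baseline/critic)."""
    returns = np.array([[n_step_return(r, t, n, gamma) for t in range(len(r))]
                        for r in reward_batch])
    return returns - returns.mean(axis=0, keepdims=True)

def pg_surrogate(log_probs, adv):
    """REINFORCE-style surrogate: weighting log pi(a_t|s_t) by A_t instead of
    the raw return gives a lower-variance gradient estimate."""
    return (adv * log_probs).mean()
```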
The “Supersymmetric Artificial Neural Network” in deep learning (denoted (x; θ, bar{θ})Tw), espouses the importance of considering biological constraints in the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’; going from SO(n) representation (such as Perceptron like models) to SU(n) representation (such as UnitaryRNNs) has guaranteed richer and richer representations in weight space of the artificial neural network, and hence better and better hypotheses were generatable. The Supersymmetric Artificial Neural Network explores a natural step forward, namely SU(m|n) representation. These supersymmetric biological brain representations (Perez et al.) can be represented by supercharge compatible special unitary notation SU(m|n), or (x; θ, bar{θ})Tw parameterized by θ, bar{θ}, which are supersymmetric directions, unlike θ seen in the typical non-supersymmetric deep learning model. Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of “partner potential” signals for example. Introduction. Machine learning non-trivially concerns the application of families of functions that guarantee more and more variations in weight space. This means that machine learning researchers study what functions are best to transform the weights of the artificial neural network, such that the weights learn to represent good values for which correct hypotheses or guesses can be produced by the artificial neural network. The "Supersymmetric Artificial Neural Network" (or 'thought curvature') is reasonably yet another way to represent richer values in the weights of the model; because supersymmetric values can allow for more information to be captured about the input space. For example, supersymmetric systems can capture potential-partner signals, which is beyond the feature space of magnitude and phase signals learnt in typical real valued neural nets and deep complex neural networks respectively. As such, a brief historical progression of geometric solution spaces for varying neural network architectures follows:1. An optimal weight space produced by shallow or low dimension integer valued nodes or real valued artificial neural nets, may have good weights that lie for example, in one simple (ℤ ℝ −) cone per class/target group. (This may guarantee some variation, but not enough for more sophisticated tasks of higher dimension) BID42. An optimal weight space produced by deep and high-dimension-absorbing real valued artificial neural nets, may have good weights that lie in disentangleable (ℝ * ℝ −) manifolds per class/target group convolved by the operator *, instead of the simpler regions per class/target group seen in item. parameterized by,, which are supersymmetric directions, unlike seen in the typical nonsupersymmetric deep learning model. Notably, Supersymmetric values can encode or represent more information than the typical deep learning model, in terms of "partner potential" signals for example. This paper does not contain empirical code concerning supersymmetric artificial neural networks, although it does highlight empirical evidence, that indicates how such types of supersymmetric learning models could exceed the state of the art, due to preservation features seen in progressing through earlier related models from the days of older perceptron like models that were not supersymmetric.(This may guarantee more variation in the weight space than, leading to better hypotheses or guesses) BID5 3. 
An optimal weight space produced by shallow but high dimension-absorbing complex valued artificial neural nets, may have good weights that lie in multiple (ℂ −) sectors per class/target group, instead of the real regions per class/target group seen amongst the prior items. (This may guarantee more variation of the weight space than the previous items, by learning additional features, in the "phase space". This also leads to better hypotheses/guesses) BID6 4. An optimal weight space produced by deep or high dimension-absorbing complex valued artificial neural nets, may have good weights that lie in chi distribution bound, (ℂ * ℂ −) rayleigh space per class/target group convolved by the operator *, instead of the simpler sectors/regions per class/target group seen amongst the previous items. (This may guarantee more variation of the weight space than the prior items, by learning phase space representations, and by extension, strengthen these representations via convolutional residual blocks. This also leads to better hypotheses/guesses) BID7 5. The "Supersymmetric Artificial Neural Network" operable on high dimensional data, may reasonably generate good weights that lie in disentangleable (∞ ( |) − ) supermanifolds per class/target group, instead of the solution geometries seen in the prior items above. Supersymmetric values can encode rich partner-potential delimited features beyond the phase space of in accordance with cognitive biological space BID2, where lacks the partner potential formulation describable in Supersymmetric embedding. BID8 Another view of "solution geometry" history, which may promote a clear way to view the reasoning behind the subsequent pseudocode sequence.1. There has been a clear progression of "solution geometries", ranging from those of the ancient Perceptron BID4 to complex valued neural nets BID7, grassmann manifold artificial neural networks BID11 or unitaryRNNs. BID10 BID14 These models may be denoted by ⊤ parameterized by, expressible as geometrical groups ranging from orthogonal BID3 to special unitary group BID16 based: to …, and they got better at representing input data i.e. representing richer weights, thus the learning models generated better hypotheses or guesses. 2. By "solution geometry" I mean simply the class of regions where an algorithm's weights may lie, when generating those weights to do some task. 3. As such, if one follows cognitive science, one would know that biological brains may be measured in terms of supersymmetric operations. (Perez et al, "Supersymmetry at brain scale" BID2) 4. These supersymmetric biological brain representations can be represented by supercharge BID9 compatible special unitary notation (|), or (;,) ⊤ parameterized by, BID8, which are supersymmetric directions, unlike seen in item BID0. Notably, Supersymmetric values can encode or represent more information than the prior classes seen in, in terms of "partner potential" signals for example. 5. So, state of the art machine learning work forming or based solution geometries, although nonsupersymmetric, are already in the family of supersymmetric solution geometries that may be observed as occurring in biological brain or (|) supergroup BID15 representation. A naive supersymmetric artificial neural network architecture. 
(See points 1 to 5 in BID13) It seems feasible that a C ∞ bound atlas-based learning model, where said C ∞ is in the family of supermanifolds from supersymmetry, may be obtained from a system, which includes charts of grassmann manifold networks, and stiefel manifolds,, in (,) terms, where there exists some invertible submatrix entailing matrix ∈ (∩) for = where is a submersion mapping on some stiefel manifold,, thereafter enabling some differentiable grassmann manifold (ℝ) wherein = {∈ ℝ × : ≠ 0}. Pertinently, the "Edward Witten/String theory powered supersymmetric artificial neural network", is one wherein supersymmetric weights are sought. Many machine learning algorithms are not empirically shown to be exactly biologically plausible, i.e. Deep Neural Network algorithms, have not been observed to occur in the brain, but regardless, such algorithms work in practice in machine learning. Likewise, regardless of Supersymmetry's elusiveness at the LHC, as seen above, it may be quite feasible to borrow formal methods from strategies in physics even if such strategies are yet to show related physical phenomena to exist; thus it may be pertinent/feasible to try to construct a model that learns supersymmetric weights, as I proposed throughout this paper, following the progression of solution geometries going from to and onwards to (|) BID15. | Generalizing backward propagation, using formal methods from supersymmetry. | 687 | scitldr |
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. However, in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical parameters. In this paper, we argue for the importance of regularizing optimization trajectories directly. We derive a new co-natural gradient update rule for continual learning whereby the new task gradients are preconditioned with the empirical Fisher information of previously learnt tasks. We show that using the co-natural gradient systematically reduces forgetting in continual learning. Moreover, it helps combat overfitting when learning a new task in a low-resource scenario.

It is good to have an end to journey toward; but it is the journey that matters, in the end.

Endowing machine learning models with the capability to learn a variety of tasks in a sequential manner is critical to obtain agents that are both versatile and persistent. However, continual learning of multiple tasks is hampered by catastrophic forgetting, the tendency of previously acquired knowledge to be overwritten when learning a new task. Techniques to mitigate catastrophic forgetting can be roughly categorized into 3 lines of work (see prior surveys for a comprehensive overview): 1. regularization-based approaches, where forgetting is mitigated by the addition of a penalty term in the learning objective (Chaudhry et al., 2018a, inter alia), 2. dynamic architectures approaches, which incrementally increase the model's capacity to accommodate the new tasks, and 3. memory-based approaches, which retain data from learned tasks for later reuse. Among these, regularization-based approaches are particularly appealing because they do not increase the model size and do not require access to past data. This is particularly relevant to real-world scenarios where keeping data from previous training tasks may be impractical because of infrastructural or privacy-related reasons. Moreover, they are of independent intellectual interest because of their biological inspiration rooted in the idea of synaptic consolidation.

A good regularizer ensures that, when learning a new task, gradient descent will ultimately converge to parameters that yield good results on the new task while preserving performance on previously learned tasks. Critically, this is predicated upon successful optimization of the regularized objective, a fact that has been largely taken for granted in previous work. Non-convexity of the loss function, along with noise in the data (due to small or biased datasets) or in the gradients (due to stochastic gradient descent), can yield optimization trajectories, and ultimately convergence points, that are highly non-deterministic, even for the same starting parameters. As we demonstrate in this paper, this can cause unintended catastrophic forgetting along the optimization path. This is illustrated in a toy setting in Figure 1: a two-parameter model is trained to perform task T2 (an arbitrary bi-modal loss function) after having learned task T1 (a logistic regression task). Standard finetuning, even in the presence of a regularized objective (EWC), quickly changes the loss of T1 and tends to converge to a solution with high T1 loss.

Figure 1: On the importance of trajectories: an example with 2-dimensional logistic regression. Having learned task T1, the model is trained on T2 with two different objectives: minimizing the loss on T2 (Finetuning) and a regularized objective (EWC). We add a small amount of Gaussian noise to gradients in order to simulate the stochasticity of the trajectory. Plain finetuning and EWC often converge to a solution with high loss for T1, but the co-natural optimization trajectory consistently converges towards the optimum with lowest loss for T1.

We propose to remedy this issue by regularizing the optimization trajectory itself, specifically by preconditioning gradient descent with the empirical Fisher information of previously learned tasks (§3). This yields what we refer to as a co-natural gradient, an update rule inspired by the natural gradient (Amari, 1998), but taking the Fisher information of previous tasks as a natural Riemannian metric of the parameter space, instead of the Fisher information of the task being optimized for. When we introduce our proposed co-natural gradient for the toy example of Figure 1, the learning trajectory follows a path that changes the loss on T1 much more slowly, and tends to converge to the optimum that incurs the lowest performance degradation on T1.

We test the validity of our approach in a continual learning scenario (§4). We show that the co-natural gradient consistently reduces forgetting in a variety of existing continual learning approaches by a factor of ≈ 1.5 to 9, and greatly improves performance over simple finetuning, without modification to the training objective. We further investigate the special case of transfer learning in a two-task, low-resource scenario. In this specific case, control over the optimization trajectory is particularly useful because the optimizer has to rely on early stopping to prevent overfitting to the meager amount of training data in the target task. We show that the co-natural gradient yields the best trade-offs between source and target domain performance over a variety of hyper-parameters (§5).

We first give a brief overview of the continual learning paradigm and existing approaches for overcoming catastrophic forgetting. Let us define a task as a triplet containing an input space X and an output space Y, both measurable spaces, as well as a distribution D over X × Y. In general, learning a task will consist of training a model to approximate the conditional distribution p(y | x) induced by D. Consider a probabilistic model p_θ parametrized by θ ∈ R^d, where d is the size of the model, trained to perform a source task S = ⟨X_S, Y_S, D_S⟩ to some level of performance, yielding parameters θ_S. In the most simple instance of continual learning, we are tasked with learning a second target task T = ⟨X_T, Y_T, D_T⟩. In general in a multitask setting, it is not the case that the input or output spaces are the same. The discrepancy between input/output spaces can be addressed in various ways, e.g. by adding a minimal number of task-specific parameters (for example, different softmax layers for different label sets). To simplify exposition, we set these more specific considerations aside for now, and assume that X_S = X_T and Y_S = Y_T. At any given point during training for task T, our objective will be to minimize the loss function L_T(θ), generally the expected negative log-likelihood E_{x,y∼D_T}[− log p_θ(y | x)]. Typically, this will be performed by iteratively adding incremental update vectors δ ∈ R^d to the parameters: θ ← θ + δ. In this paper, we focus on those models that have a fixed architecture over the course of continual learning.
The study of continual learning for models of fixed capacity can be split into two distinct (but often overlapping) streams of work. Regularization-based approaches introduce a penalty in the loss function L_T, typically quadratic, pushing the weights θ back towards θ_S:

L(θ) = L_T(θ) + λ (θ − θ_S)^⊤ Ω_S (θ − θ_S),

where Ω_S is a matrix, typically diagonal, that encodes the respective importance of each parameter with respect to task S, and λ is a regularization strength hyper-parameter. Various choices have been proposed for Ω_S: the diagonal empirical Fisher information matrix (Kirkpatrick et al., 2017), or path-integral based importance measures (Chaudhry et al., 2018a, inter alia). More elaborate regularizers have been proposed based on e.g. a Bayesian formulation of continual learning or a distillation term. The main advantage of these approaches is that they do not rely on having access to training data of previous tasks. Memory-based approaches store data from previously seen tasks for re-use in continual learning, either as a form of constraint, by e.g. ensuring that training on the new task doesn't increase the loss on previous tasks, or for replay, i.e. by retraining on instances from previous tasks. Various techniques have been proposed for the selection of samples to store in the memory or for retrieval of the samples to be used for replay (Aljundi et al., 2019a). All of these methods rely on stochastic gradient descent to optimize their regularized objective or to perform experience replay, with the notable exception of GEM (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2018b), where the gradients are projected onto the orthogonal complement of previous tasks' gradients. However, this method has been shown to perform poorly in comparison with simple replay, and it still necessitates access to data from previous tasks.

After briefly recalling how the usual update is obtained in gradient descent, we derive a new, co-natural update designed to better preserve the distribution induced by the model over previous tasks. At point θ in the parameter space, gradient descent finds the optimal update δ that is small and locally yields the largest decrease in loss. Traditionally this can be formulated as minimizing the Lagrangian

L(δ) = ∇L_T(θ)^⊤ δ + µ ‖δ‖²,

with Lagrangian multiplier µ > 0. Minimizing L for δ yields the well-known optimal update δ* = −(1/2µ) ∇L_T(θ), where 1/2µ corresponds to the learning rate (see Appendix A.1 for the full derivation). The ‖δ‖² term in L implicitly expresses the underlying assumption that the best measure of distance between parameters θ and θ + δ is the Euclidean distance. In a continual learning setting however, the quantity we are most interested in preserving is the probability distribution p_θ^S that θ models on the source task S. Therefore, a more natural distance between θ and θ + δ is the Kullback-Leibler divergence KL(p_θ^S ‖ p_{θ+δ}^S). For preventing catastrophic forgetting along the optimization path, we incorporate this KL term into the Lagrangian L itself:

L(δ) = ∇L_T(θ)^⊤ δ + µ ‖δ‖² + ν KL(p_θ^S ‖ p_{θ+δ}^S).

Doing so means that the optimization trajectory will tend to follow the direction that changes the distribution of the model the least. Notably, this is not a function of the previous objective L_S, so knowledge of the original training objective is not necessary during continual learning (which is typically the case in path-integral based regularization methods or experience replay). Presuming that δ is small, we can perform a second-order Taylor approximation of the function δ ↦ KL(p_θ^S ‖ p_{θ+δ}^S) ≈ ½ δ^⊤ F_θ^S δ, where F_θ^S is the Hessian of the KL divergence around θ.
A crucial, well-known property of this matrix is that it coincides with the Fisher information matrix F_θ^S = E_{x∼D_S, ŷ∼p_θ(·|x)}[∇_θ log p_θ(ŷ | x) ∇_θ log p_θ(ŷ | x)^⊤] (the expectation being taken over the model's distribution p_θ; see Appendix A.1 for details). This is appealing from a computational perspective because the Fisher can be computed by means of first-order derivatives only. Minimizing for δ yields the following optimal update:

δ* = −λ (F_{θ_S}^S + α I)^{−1} ∇L_T(θ),

where coefficients µ and ν are folded into two hyper-parameters: the learning rate λ and a damping coefficient α (the step-by-step derivation can be found in Appendix A.1). In practice, especially with low damping coefficients, it is common to obtain updates that are too large (typically when some parameters have no effect on the KL divergence). To address this, we re-normalize δ* to have the same norm as the original gradient, ∇L_T.

For computational reasons, we will make 3 key practical approximations to the Fisher:
• Fisher fixed at θ_S: we maintain the Fisher computed at θ_S, instead of recomputing F^S at every step of training. This relieves us of the computational burden of updating the Fisher for every new value of θ. This approximation (shared by previous work, e.g. Chaudhry et al. (2018a)) is only valid insofar as θ_S and θ are close. Empirically we observe that this still leads to good results.
• F^S is diagonal: this is a common approximation in practice with two appealing properties. First, this makes it possible to store the d diagonal Fisher coefficients in memory. Second, this trivializes the inverse operation (simply invert the diagonal elements).
• Empirical Fisher: this common approximation replaces the expectation under the model's distribution by the expected log-likelihood of the true distribution, F_{θ_S}^S ≈ E_{x,y∼D_S}[∇_θ log p_θ(y | x) ∇_θ log p_θ(y | x)^⊤] (mind the subscript). This is particularly useful in tasks with a large or unbounded number of classes (e.g. structured prediction), where summing over all possible outputs is intractable.
We can then compute the diagonal of the empirical Fisher using Monte Carlo sampling, [F_{θ_S}^S]_jj ≈ (1/N) Σ_{i=1}^{N} (∂ log p_{θ_S}(y_i | x_i) / ∂θ_j)², with (x_i, y_i) sampled from D_S (we use N = 1000 for all experiments).

This formulation bears many similarities with the natural gradient (Amari, 1998), which also uses the KL divergence as a metric for choosing the optimal update δ*. There is however a crucial difference, both in execution and purpose: where the natural gradient uses knowledge of the curvature of the KL divergence of D_T to speed up convergence, our proposed method leverages the curvature of the KL divergence on D_S to slow down divergence from p_{θ_S}^S. To highlight the resemblance and complementarity between these two concepts, we refer to the new update as the co-natural gradient.

In a continual learning scenario, we are confronted with a large number of tasks T_1 ... T_n presented in sequential order. When learning T_n, we can change the Lagrangian L from Eq. 5 to incorporate the constraints for all previous tasks T_1 ... T_{n−1}. This in turn changes the Fisher in Eq. 8 to a weighted combination of the per-task Fishers, F̃_{n−1} := Σ_{i=1}^{n−1} ν_i F^{T_i}. The choice of the coefficients ν_i is crucial. Setting all ν_i to the same value, i.e. assigning the same importance to all tasks, is suboptimal for a few reasons. First and foremost, it is unreasonable to expect a model with finite capacity to remember an unbounded number of tasks (as tasks "fill up" the model capacity, F̃_{n−1} is likely to become more "homogeneous"). Second, as training progresses and θ changes, our approximation that the Fisher computed at the end of each previous task is still accurate at the current parameters is less and less likely to hold.
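To make the single-previous-task case concrete before turning to multiple tasks, here is a minimal numpy sketch of the diagonal empirical Fisher estimate and the damped, renormalized co-natural step. This is our illustration, not the authors' code; `grad_log_lik` is a placeholder for whatever routine returns the gradient of log p_θ(y | x) for the source-task model.

```python
import numpy as np

def diag_empirical_fisher(grad_log_lik, source_samples):
    """Monte Carlo estimate of the diagonal empirical Fisher:
    F_jj ~ 1/N * sum_i (d log p_theta(y_i|x_i) / d theta_j)^2,
    with (x_i, y_i) drawn from the source task data D_S."""
    grads = [grad_log_lik(x, y) for x, y in source_samples]
    return np.mean([g * g for g in grads], axis=0)

def co_natural_step(grad_new_task, fisher_diag, lr, alpha=1e-2):
    """Precondition the new-task gradient with the damped inverse diagonal
    Fisher of the previous task, then rescale the step back to the norm of the
    original gradient to avoid overly large updates."""
    delta = grad_new_task / (fisher_diag + alpha)
    n = np.linalg.norm(delta)
    if n > 0:
        delta *= np.linalg.norm(grad_new_task) / n
    return -lr * delta  # parameter update: theta <- theta + step
```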
We address this issue in the same fashion as online EWC (Schwarz et al., 2018), by keeping a rolling exponential average of the Fisher matrices: F̃_n = γ F̃_{n−1} + F^{T_n}. In this case, previous tasks are gracefully forgotten at an exponential rate controlled by γ. We account for the damping α term in Eq. 7 by setting F̃_0 := (α/γ) I. In preliminary experiments, we have found γ = 0.9 to yield consistently good results, and use this value in all presented experiments.

To corroborate our hypothesis that controlling the optimization trajectory with the co-natural gradient reduces catastrophic forgetting, we perform experiments on the following continual learning testbeds:
• Split CIFAR: The CIFAR100 dataset, split into 20 independent 5-way classification tasks. Similarly to Chaudhry et al. (2018b), we use a smaller version of the ResNet architecture.
• Omniglot: The Omniglot dataset consists of 50 independent character recognition datasets on different alphabets. We adopt the setting of prior work and consider each alphabet as a separate task. On this dataset we use the same small CNN architecture.
• Split MiniImageNet: The MiniImageNet dataset (a subset of the popular ImageNet dataset). We split the dataset into 20 independent 5-way classification tasks, similarly to Split CIFAR, and use the same smaller ResNet.
We adopt the experimental setup of prior work: in each dataset we create a "validation set" of 3 tasks, used to select the best hyper-parameters, and keep the remaining tasks for evaluation. This split is chosen at random and kept the same across all experiments. In these datasets, the nature and possibly the number of classes for each task changes. We account for this by training a separate softmax layer for each task, and apply continual learning only to the remaining, "feature-extraction" part of the model.

We report results along two common metrics for continual learning: average accuracy, the accuracy at the end of training averaged over all tasks, and forgetting. Forgetting is defined in Chaudhry et al. (2018a) as the difference in performance on a task between the current model and the best performing model on this task. Formally, if A_t^T represents the accuracy on task T at step t of training, the forgetting F_t^T at step t is defined as F_t^T = max_{t′<t} A_{t′}^T − A_t^T.

We implement the co-natural update rule on top of 3 baselines:
• Finetuning: Simply train the model on the task at hand, without any form of regularization.
• EWC: Proposed by Kirkpatrick et al. (2017), it is a simple but effective quadratic regularization approach. While neither the most recent nor the most sophisticated regularization technique, it is a natural baseline for us to compare to in that it also consists in a Fisher-based penalty, although in the loss function instead of the optimization dynamics. We also use the rolling Fisher described in Section 3.4, making our EWC baseline equivalent to the superior online EWC introduced by Schwarz et al. (2018).
• ER: Experience replay with a fixed-size episodic memory proposed by Chaudhry et al. (2019). While not directly comparable to EWC in that it presupposes access to data from previous tasks, ER is a simple approach that boasts the best performances on a variety of benchmarks. In all our experiments, we use memory size 1,000 with reservoir sampling.

Figure 2 (accuracy curves on Omniglot): For visibility we only show accuracies for every fifth task. The rectangular shaded regions delineate the period during which each task is being trained upon; with the exception of ER, this is the only period the model has access to the data for this task.

Training proceeds as follows: we perform exhaustive search on all the hyper-parameter combinations using the validation tasks.
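As a small illustration of the bookkeeping introduced above, the rolling Fisher average and the forgetting metric can be written down as follows. This is a minimal numpy sketch; the array layout and the exclusion of the final task from the forgetting average are our choices, not necessarily the authors'.

```python
import numpy as np

def init_rolling_fisher(n_params, alpha=1e-2, gamma=0.9):
    """F~_0 := (alpha / gamma) * I folds the damping term into the recursion."""
    return (alpha / gamma) * np.ones(n_params)

def update_rolling_fisher(fisher_prev, fisher_task, gamma=0.9):
    """F~_n = gamma * F~_{n-1} + F_n: older tasks decay at rate gamma."""
    return gamma * fisher_prev + fisher_task

def forgetting(acc):
    """acc[t, T] = accuracy on task T after training stage t.
    Per-task forgetting = best past accuracy minus final accuracy, averaged
    over all tasks except the last one learned."""
    best_past = acc[:-1].max(axis=0)
    return float((best_past - acc[-1])[:-1].mean())
```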
Every combination is rerun 3 times (the order of tasks, model initialization and order of training examples changes with each restart), and rated by accuracy averaged over tasks and restarts. We then evaluate the best hyper-parameters by continual training on the evaluation tasks. Results are reported over 5 random restarts (3 for MiniImageNet), and we control for statistical significance using a paired t-test (we pair together runs with the same task ordering). We refer to Appendix A.2 for more details regarding fine-grained design choices.

The upper half of Table 1 reports the average accuracy of all the tasks at the end of training (higher is better). We observe that the co-natural gradient always improves greatly over simple finetuning, and occasionally over EWC and ER. We note that on both datasets, bare-bone co-natural finetuning matches or exceeds the performance of EWC and ER even though it requires strictly fewer resources (no need to store the previous parameters as in EWC, or data in ER). Even more appreciable is the effect of the co-natural trajectories on forgetting, as shown in the lower half of Table 1. As evidenced by the results in the lowest row, using the co-natural gradient systematically results in large drops in forgetting across all approaches and both datasets, even when the average accuracy is not increased.

To get a qualitative assessment of the learning trajectories that yield such results, we visualize the accuracy curves of 10 out of the 47 evaluation tasks of Omniglot in Figure 2. We observe that previous approaches do poorly at keeping stable levels of performance over a long period of time (especially for tasks learned early in training), a problem that is largely resolved by the co-natural preconditioning. This seems to come at the cost of more intransigence (Chaudhry et al., 2018a), i.e. some of the later tasks are not being learnt properly. In models of fixed capacity, there is a natural trade-off between intransigence and forgetting (see also the "stability-plasticity" dilemma in neuroscience). Our results position the co-natural gradient as a strong low-forgetting / moderate-intransigence basis for future work.

In this section we take a closer look at the specific case of adapting a model from one task to another, when we only have access to a minimal amount of data in the target task. In this case, controlling the learning trajectory is particularly important because the model is being trained on an unreliable sample of the true distribution of the target task, and we have to rely on early-stopping to prevent overfitting. We show that using the co-natural gradient during adaptation helps both at preserving source task performance and at reaching higher overall target task performance. We perform experiments on two different scenarios. Image classification: We take MiniImageNet as a source task and CUB (a 200-way bird species classification dataset) as a target task. To guarantee a strong base model despite the small size of MiniImageNet, we start off from a ResNet18 model pretrained on the full ImageNet, which we retrofit to MiniImageNet by replacing the last fully connected layer with a separate linear layer regressed over the MiniImageNet training data. To simulate a low-resource setting, we sub-sample the CUB training set to 200 images (≈ 1 per class). Scores for these tasks are reported in terms of accuracy.
Machine translation We consider adaptation of an English to French model trained on WMT15 (a dataset of parallel sentences crawled from parliamentary proceedings, news commentary and web page crawls;) to MTNT (a dataset of Reddit comments;). Our model is a Transformer pretrained on WMT15. Similarly to CUB, we simulate a low-resource setting by taking a sub-sample of 1000 sentence pairs as a training set. Scores for these two datasets are reported in terms of BLEU score. 6 Here we do not allow any access to data in the source task when training on the target task. We compare four methods Finetuning (our baseline), Co-natural finetuning, EWC (which has been proven effective for domain adaptation, see) and Co-natural EWC. Given that different methods might lead to different trade-offs between source and target task performance, with some variation depending on the hyper-parameters (e.g. learning rate, regularization strength. . .), we take inspiration from and graphically report for all hyper-parameter configuration of each method on the 2 dimensional space defined by the score on source and target tasks 7. Additionally, we highlight the Pareto frontier of each method i.e. the set of configurations that are not strictly worse than any other configuration for the same model. The adaptation for both scenarios are reported in Figure 3. We find that in both cases, the co-natural gradient not only helps preserving the source task performance, but to some extent it also allows the model to reach better performance on the target task as well. We take this to corroborate our starting hypothesis: while introducing a regularizer does help, controlling the optimization dynamics actively helps counteract overfitting to the very small amount of training data, because the co-natural pre-conditioning makes it harder for stochastic gradient descent to push the model towards directions that would also hurt the source task. We have presented the co-natural gradient, a technique that regularizes the optimization trajectory of models trained in a continual setting. We have shown that the co-natural gradient stands on its own as an efficient approach for overcoming catastrophic forgetting, and that it effectively complements and stabilizes other existing techniques at a minimal cost. We believe that the co-natural gradientand more generally, trajectory regularization -can serve as a solid bedrock for building agents that learn without forgetting. We solve the Lagrangian from Eq. 6 in a similar fashion as in A.1.1. First we compute its gradient and Hessian: ∇L = ∇L T + 2µδ + 2νF This section is intended to facilitate the reproduction of our . The full details can be found with our code at anonymized url. We split the dataset into 20 disjoint sub-tasks with each 5 classes, 2500 training examples and 500 test examples. This split, performed at random, is kept the same across all experiments, only the order of these tasks is changed. During continual training, we train the model for one epoch on each task with batch size 10, following the setup in Chaudhry et al. (2018b). We consider each alphabet as a separate task, and split each task such that every character is present 12, 4 and 4 times in the training, validation and test set respectively (out of the 20 images for each character). During continual training, we train for 2500 steps with batch size 32 (in keeping with). We ignore the validation data and simply evaluate on the test set at the end of training. 
For each method, we perform grid-search over the following parameter values: • Learning rate (all methods): 0.1, 0.03, 0.01 • Regularization strength (EWC, Co-natural EWC): 0.5, 1, 5 • Fisher damping coefficient (Co-natural finetuning, Co-natural EWC): 0,1.0,0.1 for Split CIFAR and 0,0.1,0.01 for Omniglot For ER, we simply set the batch size to the same value as standard training (10 and 32 for Split CIFAR and Omniglot respectively). Note that whenever applicable, we re-normalize the diagonal Fisher so that the sum of its weights is equal to the number of parameters in the model. This is so that the hyper-parameter choice is less dependent on the size of the model. In particular this means that the magnitude of each diagonal element is much bigger, which is why we do grid-search over smaller regularization parameters for EWC. | Regularizing the optimization trajectory with the Fisher information of old tasks reduces catastrophic forgetting greatly | 688 | scitldr |
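The Fisher re-normalization mentioned in the grid-search details above is a one-line rescaling; the sketch below is our phrasing of it (not the released code), for a diagonal Fisher stored as a flat array.

```python
import numpy as np

def renormalize_fisher(fisher_diag):
    """Rescale the diagonal Fisher so that its entries sum to the number of
    parameters; the regularization and damping hyper-parameters then transfer
    better across models of different sizes."""
    total = fisher_diag.sum()
    return fisher_diag if total == 0 else fisher_diag * (fisher_diag.size / total)
```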
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a `"realistic" relationship between programs and labels, as exemplified by a corpus of labeled programs available during training. Two challenges in such *conditional program generation* are that the generated programs must satisfy a rich set of syntactic and semantic constraints, and that source code contains many low-level features that impede learning. We address these problems by training a neural generator not on code but on *program sketches*, or models of program syntax that abstract out names and operations that do not generalize across programs. During generation, we infer a posterior distribution over sketches, then concretize samples from this distribution into type-safe programs using combinatorial techniques. We implement our ideas in a system for generating API-heavy Java code, and show that it can often predict the entire body of a method given just a few API calls or data types that appear in the method. Neural networks have been successfully applied to many generative modeling tasks in the recent past BID22 BID11 BID33. However, the use of these models in generating highly structured text remains relatively understudied. In this paper, we present a method, combining neural and combinatorial techniques, for the condition generation of an important category of such text: the source code of programs in Java-like programming languages. The specific problem we consider is one of supervised learning. During training, we are given a set of programs, each program annotated with a label, which may contain information such as the set of API calls or the types used in the code. Our goal is to learn a function g such that for a test case of the form (X, Prog) (where Prog is a program and X is a label), g(X) is a compilable, type-safe program that is equivalent to Prog. This problem has immediate applications in helping humans solve programming tasks BID12 BID26. In the usage scenario that we envision, a human programmer uses a label to specify a small amount of information about a program that they have in mind. Based on this information, our generator seeks to produce a program equivalent to the "target" program, thus performing a particularly powerful form of code completion. Conditional program generation is a special case of program synthesis BID19 BID32, the classic problem of generating a program given a constraint on its behavior. This problem has received significant interest in recent years BID2 BID10. In particular, several neural approaches to program synthesis driven by input-output examples have emerged BID3 BID23 BID5. Fundamentally, these approaches are tasked with associating a program's syntax with its semantics. As doing so in general is extremely hard, these methods choose to only generate programs in highly controlled domainspecific languages. For example, BID3 consider a functional language in which the only data types permitted are integers and integer arrays, control flow is linear, and there is a sum total of 15 library functions. Given a set of input-output examples, their method predicts a vector of binary attributes indicating the presence or absence of various tokens (library functions) in the target program, and uses this prediction to guide a combinatorial search for programs. 
In contrast, in conditional program generation, we are already given a set of tokens (for example library functions or types) that appear in a program or its metadata. Thus, we sidestep the problem of learning the semantics of the programming language from data. We ask: does this simpler setting permit the generation of programs from a much richer, Java-like language, with one has thousands of data types and API methods, rich control flow and exception handling, and a strong type system? While simpler than general program synthesis, this problem is still highly nontrivial. Perhaps the central issue is that to be acceptable to a compiler, a generated program must satisfy a rich set of structural and semantic constraints such as "do not use undeclared variables as arguments to a procedure call" or "only use API calls and variables in a type-safe way". Learning such constraints automatically from data is hard. Moreover, as this is also a supervised learning problem, the generated programs also have to follow the patterns in the data while satisfying these constraints. We approach this problem with a combination of neural learning and type-guided combinatorial search BID6. Our central idea is to learn not over source code, but over tree-structured syntactic models, or sketches, of programs. A sketch abstracts out low-level names and operations from a program, but retains information about the program's control structure, the orders in which it invokes API methods, and the types of arguments and return values of these methods. We propose a particular kind of probabilistic encoder-decoder, called a Gaussian Encoder-Decoder or GED, to learn a distribution over sketches conditioned on labels. During synthesis, we sample sketches from this distribution, then flesh out these samples into type-safe programs using a combinatorial method for program synthesis. Doing so effectively is possible because our sketches are designed to contain rich information about control flow and types. We have implemented our approach in a system called BAYOU. 1 We evaluate BAYOU in the generation of API-manipulating Android methods, using a corpus of about 150,000 methods drawn from an online repository. Our experiments show that BAYOU can often generate complex method bodies, including methods implementing tasks not encountered during training, given a few tokens as input. Now we define conditional program generation. Assume a universe P of programs and a universe X of labels. Also assume a set of training examples of the form {(X 1, Prog 1), (X 2, Prog 2),...}, where each X i is a label and each Prog i is a program. These examples are sampled from an unknown distribution Q(X, Prog), where X and Prog range over labels and programs, respectively. 2 We assume an equivalence relation Eqv ⊆ P × P over programs. If (Prog 1, Prog 2) ∈ Eqv, then Prog 1 and Prog 2 are functionally equivalent. The definition of functional equivalence differs across applications, but in general it asserts that two programs are "just as good as" one another. The goal of conditional program generation is to use the training set to learn a function g: X → P such that the expected value E[I((g(X), Prog) ∈ Eqv )] is maximized. Here, I is the indicator function, returning 1 if its boolean argument is true, and 0 otherwise. Informally, we are attempting to learn a function g such that if we sample (X, Prog) ∼ Q(X, P rog), g should be able to reconstitute a program that is functionally equivalent to Prog, using only the label X. 
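The objective above can only be estimated empirically, since Eqv itself is approximated in practice. The following sketch (ours, with placeholder names) shows the evaluation loop implied by the definition: sample held-out (label, program) pairs and count how often the generated program is judged equivalent to the reference.

```python
def empirical_success(generate, test_pairs, equivalent):
    """Monte Carlo estimate of E[ I((g(X), Prog) in Eqv) ] over held-out
    (label, program) pairs; `equivalent` is whatever computable approximation
    of Eqv is available (exact equivalence is undecidable in general)."""
    hits = sum(equivalent(generate(label), prog) for label, prog in test_pairs)
    return hits / len(test_pairs)
```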
In this paper, we consider a particular form of conditional program generation. We take the domain P to be the set of possible programs in a programming language called AML that captures the essence of API-heavy Java programs (see Appendix A for more details). AML includes complex control flow such as loops, if-then statements, and exceptions; access to Java API data types; and calls to Java API methods. AML is a strongly typed language, and by definition, P only includes programs that are type-safe.

Figure 1: Programs generated by BAYOU with the API method name readLine as a label. Names of variables of type T whose values are obtained from the environment are of the form $T.

To define labels, we assume three finite sets: a set Calls of possible API calls in AML, a set Types of possible object types, and a set Keys of keywords, defined as words, such as "read" and "file", that often appear in textual descriptions of what programs do. The space of possible labels is X = 2^Calls × 2^Types × 2^Keys (here 2^S is the power set of S). Defining Eqv in practice is tricky. For example, a reasonable definition of Eqv is that (Prog_1, Prog_2) ∈ Eqv iff Prog_1 and Prog_2 produce the same outputs on all inputs. But given the richness of AML, the problem of determining whether two AML programs always produce the same output is undecidable. As such, in practice we can only measure success indirectly, by checking whether the programs use the same control structures, and whether they can produce the same API call sequences. We will discuss this issue more in Section 6.

Consider the label X = (X_Calls, X_Types, X_Keys) where X_Calls = {readLine} and X_Types and X_Keys are empty. Figure 1(a) shows a program that our best learner stochastically returns given this input. As we see, this program indeed reads lines from a file, whose name is given by a special variable $String that the code takes as input. It also handles exceptions and closes the reader, even though these actions were not directly specified. Although the program in Figure 1(a) matches the label well, failures do occur. Sometimes, the system generates a program as in Figure 1(b), which uses an InputStreamReader rather than a FileReader. It is possible to rule out this program by adding more information to the label. Suppose we amend X_Types so that X_Types = {FileReader}. BAYOU now tends to only generate programs that use FileReader. The variations then arise from different ways of handling exceptions and constructing FileReader objects (some programs use a String argument, while others use a File object).

Our approach is to learn g via maximum conditional likelihood estimation (CLE). That is, given a distribution family P(Prog | X, θ) for a parameter set θ, we choose θ* = arg max_θ Σ_i log P(Prog_i | X_i, θ). Then, g(X) = arg max_Prog P(Prog | X, θ*). The key innovation of our approach is that here, learning happens at a higher level of abstraction than (X_i, Prog_i) pairs. In practice, Java-like programs contain many low-level details (for example, variable names and intermediate results) that can obscure patterns in code. Further, they contain complicated semantic rules (for example, for type safety) that are difficult to learn from data. In contrast, these are relatively easy for a combinatorial, syntax-guided program synthesizer BID2 to deal with.
However, synthesizers have a notoriously difficult time figuring out the correct "shape" of a program (such as the placement of loops and conditionals), which we hypothesize should be relatively easy for a statistical learner. Specifically, our approach learns over sketches: tree-structured data that capture key facets of program syntax. A sketch Y does not contain low-level variable names and operations, but carries information about broadly shared facets of programs such as the types and API calls. During generation, a program synthesizer is used to generate programs from sketches produced by the learner.

Let the universe of all sketches be denoted by Y. The sketch for a given program is computed by applying an abstraction function α: P → Y. We call a sketch Y satisfiable, and write sat(Y), if α^{−1}(Y) ≠ ∅. The process of generating (type-safe) programs given a satisfiable sketch Y is probabilistic, and captured by a concretization distribution P(Prog | Y, sat(Y)). We require that for all programs Prog and sketches Y such that sat(Y), P(Prog | Y, sat(Y)) > 0 only if α(Prog) = Y. Importantly, the concretization distribution is fixed and chosen heuristically. The alternative of learning this distribution from source code poses difficulties: a single sketch can correspond to many programs that only differ in superficial details, and deciding which differences between programs are superficial and which are not requires knowledge about program semantics. In contrast, our heuristic approach utilizes known semantic properties of programming languages like ours, for example that local variable names do not matter, and that some algebraic expressions are semantically equivalent. This knowledge allows us to limit the set of programs that we generate.

Let us define a random variable Y = α(Prog). We assume that the variables X, Y and Prog are related as in the Bayes net in Figure 2. Specifically, given Y, Prog is conditionally independent of X. Further, let us assume a distribution family P(Y | X, θ). Under these assumptions, the program likelihood factorizes as

P(Prog | X, θ) = Σ_Y P(Prog | Y) P(Y | X, θ) = P(Prog | α(Prog)) P(α(Prog) | X, θ),

since P(Prog | Y) = 0 whenever Y ≠ α(Prog). Because the concretization distribution is fixed, maximizing the conditional likelihood of programs reduces to maximizing the conditional likelihood of their sketches. Our problem now simplifies to learning over sketches, i.e., finding

θ* = arg max_θ Σ_i log P(Y_i | X_i, θ), where Y_i = α(Prog_i). (1)

FIG1 shows the full grammar for sketches in our implementation. Here, τ_0, τ_1, ... range over a finite set of API data types that AML programs can use. A data type, akin to a Java class, is identified with a finite set of API method names (including constructors), and a ranges over these names. Note that sketches do not contain constants or variable names. A full definition of the abstraction function for AML appears in Appendix B. As an example, API calls in AML have the syntax "call e.a(e_1, ..., e_k)", where a is an API method, the expression e evaluates to the object on which the method is called, and the expressions e_1, ..., e_k evaluate to the arguments of the method call. We abstract this call into an abstract method call "call τ.a(τ_1, ..., τ_k)", where τ is the type of e and τ_i is the type of e_i. The keywords skip, while, if-then-else, and try-catch preserve information about control flow and exception handling. Boolean conditions Cseq are replaced by abstract expressions: lists whose elements abstract the API calls in Cseq.

Now we describe our learning approach. Equation 1 leaves us with the problem of computing arg max_θ Σ_i log P(Y_i | X_i, θ), when each X_i is a label and Y_i is a sketch.
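Before turning to the learning approach, the abstraction described above can be illustrated on a toy AST. The tuple encoding below is entirely ours (the real system works over typed Java/AML ASTs, with receiver and argument types obtained by type inference); it only shows the information a sketch keeps and the information it drops.

```python
# Toy AST encoding (ours, for illustration):
#   ("call", recv_type, method, [arg_types])        an API call with resolved types
#   ("seq", [stmts]) / ("if", conds, a, b) / ("while", conds, body) / ("skip",)
def abstract(node):
    """Keep control structure, API method names and argument/receiver types;
    drop variable names, constants and other low-level details."""
    kind = node[0]
    if kind == "call":
        _, recv_type, method, arg_types = node
        return ("call", recv_type, method, tuple(arg_types))
    if kind == "seq":
        return ("seq", tuple(abstract(s) for s in node[1]))
    if kind == "if":
        _, conds, then_b, else_b = node
        return ("if", tuple(abstract(c) for c in conds), abstract(then_b), abstract(else_b))
    if kind == "while":
        _, conds, body = node
        return ("while", tuple(abstract(c) for c in conds), abstract(body))
    return ("skip",)
```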
Our answer is to utilize an encoder-decoder and introduce a real vector-valued latent variable Z to stochastically link labels and sketches: DISPLAYFORM0 is realized as a probabilistic decoder mapping a vector-valued variable to a distribution over trees. We describe this decoder in Appendix C. As for P (Z|X, θ), this distribution can, in principle, be picked in any way we like. In practice, because both P (Y |Z, θ) and P (Z|X, θ) have neural components with numerous parameters, we wish this distribution to regularize the learner. To provide this regularization, we assume a Normal (0, I) prior on Z.Recall that our labels are of the form X = (X Calls, X T ypes, X Keys), where X Calls, X Types, and X Keys are sets. Assuming that the j-th elements X Calls,j, X Types,j, and X Keys,j of these sets are generated independently, and assuming a function f for encoding these elements, let: DISPLAYFORM1 That is, the encoded value of each X Types,j, X Calls,j or X Keys,j is sampled from a high-dimensional Normal distribution centered at Z. If f is 1-1 and onto with the set R m then from Normal-Normal conjugacy, we have: DISPLAYFORM2 1+n I, where DISPLAYFORM3 Keys. Here, n Types is the number of types supplied, and n Calls and n Keys are defined similarly. Note that this particular P (Z|X, θ) only follows directly from the Normal (0, I) prior on Z and Normal likelihood P (X|Z, θ) if the encoding function f is 1-1 and onto. However, even if f is not 1-1 and onto (as will be the case if f is implemented with a standard feed-forward neural network) we can still use this probabilistic encoder, and in practice we still tend to see the benefits of the regularizing prior on Z, with P (Z) distributed approximately according to a unit Normal. We call this type of encoder-decoder, with a single, Normally-distributed latent variable Z linking the input and output, a Gaussian encoder-decoder, or GED for short. Now that we have chosen P (X|Z, θ) and P (Y |Z, θ), we must choose θ to perform CLE. Note that: DISPLAYFORM4 where the ≥ holds due to Jensen's inequality. Hence, L(θ) serves as a lower bound on the loglikelihood, and so we can compute θ * = arg max θ L(θ) as a proxy for the CLE. We maximize this lower bound using stochastic gradient ascent; as P (Z|X i, θ) is Normal, we can use the reparameterization trick common in variational auto-encoders BID14 while doing so. The parameter set θ contains all of the parameters of the encoding function f as well as σ Types, σ Calls, and σ Keys, and the parameters used in the decoding distribution funciton P (Y |Z, θ). The final step in our algorithm is to "concretize" sketches into programs, following the distribution P (Prog|Y). Our method of doing so is a type-directed, stochastic search procedure that builds on combinatorial methods for program synthesis BID28 BID6.Given a sketch Y, our procedure performs a random walk in a space of partially concretized sketches (PCSs). A PCS is a term obtained by replacing some of the abstract method calls and expressions in a sketch by AML method calls and AML expressions. For example, the term "x 1.a(x 2); τ 1.b(τ 2)", which sequential composes an abstract method call to b and a "concrete" method call to a, is a PCS. The state of the procedure at the i-th point of the walk is a PCS H i. The initial state is Y.Each state H has a set of neighbors Next(H). 
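Returning to the encoder P(Z | X, θ) described above, the Normal-Normal conjugacy gives the posterior over Z in closed form. The sketch below assumes a unit-Normal prior on Z and an isotropic Normal likelihood for each encoded label element; variable names and shapes are ours.

```python
import numpy as np

def ged_posterior(encodings, sigmas, dim):
    """encodings: dict kind -> list of encoded vectors f(X_kind,j) in R^dim.
    sigmas: dict kind -> sigma_kind. Returns the mean and scalar covariance
    scale of P(Z | X) under Z ~ Normal(0, I), f(X_kind,j) ~ Normal(Z, sigma^2 I)."""
    precision = 1.0                      # prior contributes precision 1
    weighted = np.zeros(dim)
    for kind, vecs in encodings.items():
        s2 = sigmas[kind] ** 2
        for v in vecs:
            weighted += v / s2
            precision += 1.0 / s2
    return weighted / precision, 1.0 / precision   # mean, covariance = (1/precision) I
```

At a high level this matches the text: the posterior mean is a precision-weighted average of the encoded calls, types and keywords, shrunk towards the prior mean 0, and the covariance shrinks as more label elements are supplied.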
This set consists of all PCS-s H that are obtained by concretizing a single abstract method call or expression in H, using variable names in a way that is consistent with the types of all API methods and declared variables in H.The (i + 1)-th state in a walk is a sample from a predefined, heuristically chosen distribution P (H i+1 | H i,). The only requirement on this distribution is that it assigns nonzero probability to a state iff it belongs to Next(H i). In practice, our implementation of this distribution prioritizes programs that are simpler. The random walk ends when it reaches a state H * that has no neighbors. If H * is fully concrete (that is, an AML program), then the walk is successful and H * is returned as a sample. If not, the current walk is rejected, and a fresh walk is started from the initial state. Recall that the concretization distribution P (Prog|Y) is only defined for sketches Y that are satisfiable. Our concretization procedure does not assume that its input Y is satisfiable. However, if Y is not satisfiable, all random walks that it performs end with rejection, causing it to never terminate. While the worst-case complexity of this procedure is exponential in the generated programs, it performs well in practice because of our chosen language of sketches. For instance, our search does not need to discover the high-level structure of programs. Also, sketches specify the types of method arguments and return values, and this significantly limits the search space. Now we present an empirical evaluation of the effectiveness of our method. The experiments we describe utilize data from an online repository of about 1500 Android apps (and, 2017). We decompiled the APKs using JADX BID29 to generate their source code. Analyzing about 100 million lines of code that were generated, we extracted 150,000 methods that used Android APIs or the Java library. We then pre-processed all method bodies to translate the code from Java to AML, preserving names of relevant API calls and data types as well as the high-level control flow. Hereafter, when we say "program" we refer to an AML program. Figure 4: Statistics on labelsFrom each program, we extracted the sets X Calls, X Types, and X Keys as well as a sketch Y. Lacking separate natural language dscriptions for programs, we defined keywords to be words obtained by splitting the names of the API types and calls that the program uses, based on camel case. For instance, the keywords obtained from the API call readLine are "read" and "line". As API method and types in Java tend to be carefully named, these words often contain rich information about what programs do. Figure 4 gives some statistics on the sizes of the labels in the data. From the extracted data, we randomly selected 10,000 programs to be in the testing and validation data each. We implemented our approach in our tool called BAYOU, using TensorFlow BID1 to implement the GED neural model, and the Eclipse IDE for the abstraction from Java to the language of sketches and the combinatorial concretization. In all our experiments we performed cross-validation through grid search and picked the best performing model. Our hyper-parameters for training the model are as follows. We used 64, 32 and 64 units in the encoder for API calls, types and keywords, respectively, and 128 units in the decoder. The latent space was 32-dimensional. We used a mini-batch size of 50, a learning rate of 0.0006 for the Adam gradient-descent optimizer BID13, and ran the training for 50 epochs. 
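A minimal skeleton of the random walk over partially concretized sketches described earlier in this section is given below. It is our sketch rather than the BAYOU implementation: `neighbors` and `is_concrete` are placeholders for the type-directed neighbor enumeration and the fully-concrete test, and a uniform choice stands in for the heuristic transition distribution (which in practice prioritizes simpler programs).

```python
import random

def concretize(sketch, neighbors, is_concrete, max_walks=1000):
    """Repeatedly walk from the sketch, concretizing one abstract call or
    expression per step, until a state with no neighbors is reached. If that
    state is a full (type-safe) program, return it; otherwise restart."""
    for _ in range(max_walks):
        state = sketch
        while True:
            nxt = neighbors(state)       # type-consistent one-step concretizations
            if not nxt:
                break
            state = random.choice(nxt)
        if is_concrete(state):
            return state
    return None  # e.g. the sketch may be unsatisfiable
```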
The training was performed on an AWS "p2.xlarge" machine with an NVIDIA K80 GPU with 12GB GPU memory. As each sketch was broken down into a set of production paths, the total number of data points fed to the model was around 700,000 per epoch. Training took 10 hours to complete. To visualize clustering in the 32-dimensional latent space, we provided labels X from the testing data and sampled Z from P (Z|X), and then used it to sample a sketch from P (Y |Z). We then used t-SNE BID17 to reduce the dimensionality of Z to 2 dimensions, and labeled each point with the API used in the sketch Y. Figure 5 shows this 2-dimensional space, where each label has been coded with a different color. It is immediately apparent from the plot that the model has learned to cluster the latent space neatly according to different APIs. Some APIs such as java.io have several modes, and we noticed separately that each mode corresponds to different usage scenarios of the API, such as reading versus writing in this case. To evaluate prediction accuracy, we provided labels from the testing data to our model, sampled sketches from the distribution P (Y |X) and concretized each sketch into an AML program using our combinatorial search. We then measured the number of test programs for which a program that is equivalent to the expected one appeared in the top-10 results from the model. As there is no universal metric to measure program equivalence (in fact, it is an undecidable problem in general), we used several metrics to approximate the notion of equivalence. We defined the following metrics on the top-10 programs predicted by the model:
M1. This binary metric measures whether the expected program appeared in a syntactically equivalent form in the results. Of course, an impediment to measuring this is that the names of variables used in the expected and predicted programs may not match. It is neither reasonable nor useful for any model of code to learn the exact variable names in the training data. Therefore, in performing this equivalence check, we abstract away the variable names and compare the rest of the program's Abstract Syntax Tree (AST) instead.
M2. This metric measures the minimum Jaccard distance between the sets of sequences of API calls made by the expected and predicted programs. It is a measure of how close to the original program we were able to get in terms of sequences of API calls.
M3. Similar to metric M2, this metric measures the minimum Jaccard distance between the sets of API calls in the expected and predicted programs (a short sketch of these Jaccard-based metrics is given below).
M4. This metric computes the minimum absolute difference between the number of statements in the expected and sampled programs, as a ratio of that in the former.
M5. Similar to metric M4, this metric computes the minimum absolute difference between the number of control structures in the expected and sampled programs, as a ratio of that in the former. Examples of control structures are branches, loops, and try-catch statements.
To evaluate our model's ability to predict programs given a small amount of information about its code, we varied the fraction of the set of API calls, types, and keywords provided as input from the testing data. We experimented with 75%, 50% and 25% observability in the testing data; the median number of items in a label in these cases were 9, 6, and 2, respectively. Figure 6: Accuracy of different models on testing data. GED-AML and GSNN-AML are baseline models trained over AML ASTs, GED-Sk and GSNN-Sk are models trained over sketches.
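To make metrics M2 and M3 above concrete, the following sketch shows one way the minimum Jaccard distance over the top-10 predicted programs could be computed. The data representation used here (API-call sequences as tuples, plain sets of calls, hypothetical dictionary keys) is a simplification of ours, not the paper's actual evaluation code.

```python
def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B|; 0 means the two sets are identical."""
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def metric_m2(expected_sequences, predicted_programs):
    """M2: minimum Jaccard distance between sets of API-call sequences (tuples)."""
    return min(jaccard_distance(expected_sequences, p["call_sequences"])
               for p in predicted_programs)

def metric_m3(expected_calls, predicted_programs):
    """M3: minimum Jaccard distance between sets of individual API calls."""
    return min(jaccard_distance(expected_calls, p["calls"])
               for p in predicted_programs)

if __name__ == "__main__":
    expected_sequences = {("FileReader.<init>", "BufferedReader.readLine"),
                          ("BufferedReader.close",)}
    expected_calls = {"FileReader.<init>", "BufferedReader.readLine", "BufferedReader.close"}
    top10 = [{"call_sequences": {("FileReader.<init>", "BufferedReader.readLine")},
              "calls": {"FileReader.<init>", "BufferedReader.readLine"}}]
    print(metric_m2(expected_sequences, top10))  # 0.5
    print(metric_m3(expected_calls, top10))      # ~0.33
```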
In order to compare our model with state-of-the-art conditional generative models, we implemented the Gaussian Stochastic Neural Network (GSNN) presented by BID30, using the same tree-structured decoder as the GED. There are two main differences: (i) the GSNN's decoder is also conditioned directly on the input label X in addition to Z, which we accomplish by concatenating its initial state with the encoding of X, (ii) the GSNN loss function has an additional KL-divergence term weighted by a hyper-parameter β. We subjected the GSNN to the same training and crossvalidation process as our model. In the end, we selected a model that happened to have very similar hyper-parameters as ours, with β = 0.001. In order to evaluate the effect of sketch learning for program generation, we implemented and compared with a model that learns directly over programs. Specifically, the neural network structure is exactly the same as ours, except that instead of being trained on production paths in the sketches, the model is trained on production paths in the ASTs of the AML programs. We selected a model that had more units in the decoder compared to our model, as the AML grammar is more complex than the grammar of sketches. We also implemented a similar GSNN model to train over AML ASTs directly. Figure 6 shows the collated of this evaluation, where each entry computes the average of the corresponding metric over the 10000 test programs. It takes our model about 8 seconds, on average, to generate and rank 10 programs. When testing models that were trained on AML ASTs, namely the GED-AML and GSNN-AML models, we observed that out of a total of 87,486 AML ASTs sampled from the two models, 2525 (or 3%) ASTs were not even well-formed, i.e., they would not pass a parser, and hence had to be discarded from the metrics. This number is 0 for the GED-Sk and GSNN-Sk models, meaning that all AML ASTs that were obtained by concretizing sketches were well-formed. In general, one can observe that the GED-Sk model performs best overall, with GSNN-Sk a reasonable alternative. We hypothesize that the reason GED-Sk performs slightly better is the regularizing prior on Z; since the GSNN has a direct link from X to Y, it can choose to ignore this regularization. We would classify both these models as suitable for conditional program generation. However, the other two models GED-AML and GSNN-AML perform quite worse, showing that sketch learning is key in addressing the problem of conditional program generation. To evaluate how well our model generalizes to unseen data, we gather a subset of the testing data whose data points, consisting of label-sketch pairs (X, Y), never occurred in the training data. We then evaluate the same metrics in Figure 6 (a)-(e), but due to space reasons we focus on the 50% observability column. Figure 6 (f) shows the of this evaluation on the subset of 5126 (out of 10000) unseen test data points. The metrics exhibit a similar trend, showing that the models based on sketch learning are able to generalize much better than the baseline models, and that the GED-Sk model performs the best. Unconditional, corpus-driven generation of programs has been studied before BID18; BID4, as has the generation of code snippets conditioned on a context into which the snippet is merged BID21 BID26 BID20. 
These prior efforts often use models like n-grams BID21 and recurrent neural networks BID26 that are primarily suited to the generation of straight-line programs; almost universally, they cannot guarantee semantic properties of generated programs. Among prominent exceptions, BID18 use log-bilinear tree-traversal models, a class of probabilistic pushdown automata, for program generation. BID4 study a generalization of probabilistic grammars known as probabilistic higher-order grammars. Like our work, these papers address the generation of programs that satisfy rich constraints such as the type-safe use of names. In principle, one could replace our decoder and the combinatorial concretizer, which together form an unconditional program generator, with one of these models. However, given our experiments, doing so is unlikely to lead to good performance in the end-to-end problem of conditional program generation. There is a line of existing work considering the generation of programs from text BID34 BID16 BID25. These papers use decoders similar to the one used in BAYOU, and since they are solving the text-to-code problem, they utilize attention mechanisms not found in BAYOU. Those attention mechanisms could be particularly useful were BAYOU extended to handle natural language evidence. The fundamental difference between these works and BAYOU, however, is the level of abstraction at which learning takes place. These papers attempt to translate text directly into code, whereas BAYOU uses neural methods to produce higher-level sketches that are translated into program code using symbolic methods. This two-step code generation process is central to BAYOU. It ensures key semantic properties of the generated code (such as type safety) and by abstracting away from the learner many lower-level details, it may make learning easier. We have given experimental evidence that this approach can give better than translating directly into code. BID15 propose a variational autoencoder for context-free grammars. As an autoencoder, this model is generative, but it is not a conditional model such as ours. In their application of synthesizing molecular structures, given a particular molecular structure, their model can be used to search the latent space for similar valid structures. In our setting, however, we are not given a sketch but only a label for the sketch, and our task is learn a conditional model that can predict a whole sketch given a label. Conditional program generation is closely related to program synthesis BID10, the problem of producing programs that satisfy a given semantic specification. The programming language community has studied this problem thoroughly using the tools of combinatorial search and symbolic reasoning BID2 BID31 BID8 BID6. A common tactic in this literature is to put syntactic limitations on the space of feasible programs BID2. This is done either by adding a human-provided sketch to a problem instance BID31, or by restricting synthesis to a narrow DSL BID8 BID24.A recent body of work has developed neural approaches to program synthesis. Terpret and Neural Forth BID27 use neural learning over a set of user-provided examples to complete a user-provided sketch. In neuro-symbolic synthesis BID23 and RobustFill BID5, a neural architecture is used to encode a set of input-output examples and decode the ing representation into a Flashfill program. DeepCoder BID3 uses neural techniques to speed up the synthesis of Flashfill programs. 
These efforts differ from ours in goals as well as methods. Our problem is simpler, as it is conditioned on syntactic, rather than semantic, facets of programs. This allows us to generate programs in a complex programming language over a large number of data types and API methods, without needing a human-provided sketch. The key methodological difference between our work and symbolic program synthesis lies in our use of data, which allows us to generalize from a very small amount of specification. Unlike our approach, most neural approaches to program synthesis do not combine learning and combinatorial techniques. The prominent exception is Deepcoder BID3, whose relationship with our work was discussed in Section 1. We have given a method for generating type-safe programs in a Java-like language, given a label containing a small amount of information about a program's code or metadata. Our main idea is to learn a model that can predict sketches of programs relevant to a label. The predicted sketches are concretized into code using combinatorial techniques. We have implemented our ideas in BAYOU, a system for the generation of API-heavy code. Our experiments indicate that the system can often generate complex method bodies from just a few tokens, and that learning at the level of sketches is key to performing such generation effectively. An important distinction between our work and classical program synthesis is that our generator is conditioned on uncertain, syntactic information about the target program, as opposed to hard constraints on the program's semantics. Of course, the programs that we generate are type-safe, and therefore guaranteed to satisfy certain semantic constraints. However, these constraints are invariant across generation tasks; in contrast, traditional program synthesis permits instance-specific semantic constraints. Future work will seek to condition program generation on syntactic labels as well as semantic constraints. As mentioned earlier, learning correlations between the syntax and semantics of programs written in complex languages is difficult. However, the approach of first generating and then concretizing a sketch could reduce this difficulty: sketches could be generated using a limited amount of semantic information, and the concretizer could use logic-based techniques BID2 BID10 to ensure that the programs synthesized from these sketches match the semantic constraints exactly. A key challenge here would be to calibrate the amount of semantic information on which sketch generation is conditioned. A THE AML LANGUAGE AML is a core language that is designed to capture the essence of API usage in Java-like languages. Now we present this language. DISPLAYFORM0 AML uses a finite set of API data types. A type is identified with a finite set of API method names (including constructors); the type for which this set is empty is said to be void. Each method name a is associated with a type signature (τ 1, . . ., τ k) → τ 0, where τ 1,..., τ k are the method's input types and τ 0 is its return type. A method for which τ 0 is void is interpreted to not return a value. Finally, we assume predefined universes of constants and variable names. The grammar for AML is as in FIG4. Here, x, x 1,... are variable names, c is a constant, and a is a method name. The syntax for programs Prog includes method calls, loops, branches, statement sequencing, and exception handling. 
We use variables to feed the output of one method into another, and the keyword let to store the return value of a call in a fresh variable. Exp stands for (objectvalued) expressions, which include constants, variables, method calls, and let-expressions such as "let x = Call: Exp", which stores the return value of a call in a fresh variable x, then uses this binding to evaluate the expression Exp. (Arithmetic and relational operators are assumed to be encompassed by API methods.)The operational semantics and type system for AML are standard, and consequently, we do not describe these in detail. We define the abstraction function α for the AML language in Figure 9. In this section we present the details of the neural networks used by BAYOU. The task of the neural encoder is to implement the encoding function f for labels, which accepts an element from a label, say X Calls,i as input and maps it into a vector in d-dimensional space, where d is the dimensionality of the latent space of Z. To achieve this, we first convert each element X Calls,i into its one-hot vector representation, denoted X Calls,i. Then, let h be the number of neural hidden DISPLAYFORM0 DISPLAYFORM1 be real-valued weight and bias matrices of the neural network. The encoding function f (X Calls,i) can be defined as follows: DISPLAYFORM2 where tanh is a non-linearity defined as tanh(x) = 1−e −2x 1+e −2x. This would map any given API call into a d-dimensional real-valued vector. The values of entries in the matrices W h, b h, W d and b d will be learned during training. The encoder for types can be defined analogously, with its own set of matrices and hidden state. The task of the neural decoder is to implement the sampler for Y ∼ P (Y |Z). This is implemented recursively via repeated samples of production rules Y i in the grammar of sketches, drawn as DISPLAYFORM0 The generation of each Y i requires the generation of a new "path" from a series of previous "paths", where each path corresponds to a series of production rules fired in the grammar. As a sketch is tree-structured, we use a top-down tree-structured recurrent neural network similar to BID35, which we elaborate in this section. First, similar to the notion of a "dependency path" in BID35, we define a production path as a sequence of pairs (v 1, e 1), (v 2, e 2),..., (v k, e k) where v i is a node in the sketch (i.e., a term in the grammar) and e i is the type of edge that connects v i with v i+1. Our representation has two types of edges: sibling and child. A sibling edge connects two nodes at the same level of the tree and under the same parent node (i.e., two terms in the RHS of the same rule). A child edge connects a node with another that is DISPLAYFORM1 1 + e −2x and DISPLAYFORM2 for j ∈ 1... K Figure 11: Computing the hidden state and output of the decoder one level deeper in the tree (i.e., the LHS with a term in the RHS of a rule). We consider a sequence of API calls connected by sequential composition as siblings. The root of the entire tree is a special node named root, and so the first pair in all production paths is (root, child). The last edge in a production path is irrelevant (·) as it does not connect the node to any subsequent nodes. As an example, consider the sketch in FIG3 (a), whose representation as a tree for the decoder is shown in Figure 10. For brevity, we use s and c for sibling and child edges respectively, abbreviate some classnames with uppercase letters in their name, and omit the first pair (root, c) that occurs in all paths. 
There are four production paths in the tree of this sketch:Let h be the number of hidden units in the decoder, and |G| be the size of the decoder's output vocabulary, i.e., the total number of terminals and non-terminals in the grammar of sketches. and b e v ∈ R h be the input weight and bias matrices, and W e y ∈ R h×|G| and b e y ∈ R |G| be the output weight and bias matrices, where e is the type of edge: either c (child) or s (sibling). We also use "lifting" matrices W l ∈ R d×h and b l ∈ R h, to lift the d-dimensional vector Z onto the (typically) higher-dimensional hidden state space h of the decoder. Let h i and y i be the hidden state and output of the network at time point i. We compute these quantities as given in Figure 11, where tanh is a non-linear activation function that converts any given value to a value between -1 and 1, and softmax converts a given K-sized vector of arbitrary values to another K-sized vector of values in the range that sum to 1-essentially a probability distribution. The type of edge at time i decides which RNN to choose to update the (shared) hidden state h i and the output y i. Training consists of learning values for the entries in all the W and b matrices. During training, v i, e i and the target output are known from the data point, and so we optimize a standard cross-entropy loss function (over all i) between the output y i and the target output. During inference, P (v i+1 |Y i, Z) is simply the probability distribution y i, the of the softmax. A sketch is obtained by starting with the root node pair (v 1, e 1) = (root, child), recursively applying Equation 2 to get the output distribution y i, sampling a value for v i+1 from y i, and growing the tree by adding the sampled node to it. The edge e i+1 is provided as c or s depending on the v i+1 that was Figure 12: 2-dimensional projection of latent space of the GSNN-Sk model sampled. If only one type of edge is feasible (for instance, if the node is a terminal in the grammar, only a sibling edge is possible with the next node), then only that edge is provided. If both edges are feasible, then both possibilities are recursively explored, growing the tree in both directions. Remarks. In our implementation, we generate trees in a depth-first fashion, by exploring a child edge before a sibling edge if both are possible. If a node has two children, a neural encoding of the nodes that were generated on the left is carried onto the right sub-tree so that the generation of this tree can leverage additional information about its previously generated sibling. We refer the reader to Section 2.4 of BID35 for more details. In this section, we provide of additional experimental evaluation. Similar to the visualization of the 2-dimensional latent space in Figure 5, we also plotted the latent space of the GSNN-Sk model trained on sketches. Figure 12 shows this plot. We observed that the latent space is clustered, relatively, more densely than that of our model (keep in mind that the plot colors are different when comparing them). To give a sense of the quality of the end-to-end generation, we present and discuss a few usage scenarios for our system, BAYOU. In each scenario, we started with a set of API calls, types or keywords as labels that indicate what we (as the user) would like the generated code to perform. We then pick a single program in the top-5 returned by BAYOU and discuss it. FIG1 shows three such example usage scenarios. 
In the first scenario, we would like the system to generate a program to write something to a file by calling write using the type FileWriter. With this label, we invoked BAYOU and it returned with a program that actually accomplishes the task. Note that even though we only specified FileWriter, the program uses it to feed a BufferedWriter to write to a file. This is an interesting pattern learned from data, that file reads and writes in Java often take place in a buffered manner. Also note that the program correctly flushes the buffer before closing it, even though none of this was explicitly specified in the input. In the second scenario, we would like the generated program to set the title and message of an Android dialog. This time we provide no API calls or types but only keywords. With this, BAYOU generated a program that first builds an Android dialog box using the helper class AlertDialog. Builder, and does set its title and message. In addition, the program also adds a | We give a method for generating type-safe programs in a Java-like language, given a small amount of syntactic information about the desired code. | 689 | scitldr |
We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to existing and classical techniques. We demonstrate the proposed approach both with standalone models, as well as a part of larger sequential latent variable models. Results are presented on three benchmark video datasets, where flow-based dynamics improve log-likelihood performance over baseline models. Data often contain sequential structure, providing a rich signal for learning models of the world. Such models are useful for learning self-supervised representations of sequences and planning sequences of actions . While sequential models have a longstanding tradition in probabilistic modeling , it is only recently that improved computational techniques, primarily deep networks, have facilitated learning such models from high-dimensional data , particularly video and audio. Dynamics in these models typically contain a combination of stochastic and deterministic variables (; ; ;), using simple distributions (e.g. Gaussian) to directly model the likelihood of data observations. However, attempting to capture all sequential dependencies with relatively unstructured dynamics may make it more difficult to learn such models. Intuitively, the model should use its dynamical components to track changes in the input instead of simultaneously modeling the entire signal. Rather than expanding the computational capacity of the model, we seek a method for altering the representation of the data to provide a more structured form of dynamics. To incorporate more structured dynamics, we propose an approach for sequence modeling based on autoregressive normalizing flows , consisting of one or more autoregressive transforms in time. A single transform is equivalent to a Gaussian autoregressive model. However, by stacking additional transforms or latent variables on top, we can arrive at more expressive models. Each autoregressive transform serves as a moving reference frame in which higher-level structure is modeled. This provides a general mechanism for separating different forms of dynamics, with higher-level stochastic dynamics modeled in the simplified space provided by lower-level deterministic transforms. In fact, as we discuss, this approach generalizes the technique of modeling temporal derivatives to simplify dynamics estimation . We empirically demonstrate this approach, both with standalone autoregressive normalizing flows, as well as by incorporating these flows within more flexible sequential latent variable models. While normalizing flows have been applied in a few sequential contexts previously, we emphasize the use of these models in conjunction with sequential latent variable models. We present experimental on three benchmark video datasets, showing improved quantitative performance in terms of log-likelihood. In formulating this general technique for improving dynamics estimation in the framework of normalizing flows, we also help to contextualize previous work. Figure 1: Affine Autoregressive Transforms. Computational diagrams for forward and inverse affine autoregressive transforms . Each y t is an affine transform of x t, with the affine parameters potentially non-linear functions of x <t. 
The inverse transform is capable of converting a correlated input, x 1:T, into a less correlated variable, y 1:T. Consider modeling discrete sequences of observations, x 1:T ∼ p data (x 1:T), using a probabilistic model, p θ (x 1:T), with parameters θ. Autoregressive models use the chain rule of probability to express the joint distribution over all time steps as the product of T conditional distributions. Because of the forward nature of the world, as well as for handling variable-length sequences, these models are often formulated in forward temporal order: Each conditional distribution, p θ (x t |x <t), models the temporal dependence between time steps, i.e. a prediction of the future. For continuous variables, we often assume that each distribution takes a relatively simple form, such as a diagonal Gaussian density: where µ θ (·) and σ θ (·) are functions denoting the mean and standard deviation, often sharing parameters over time steps. While these functions may take the entire past sequence of observations as input, e.g. through a recurrent neural network, they may also be restricted to a convolutional window (van den a). Autoregressive models can also be applied to non-sequential data (van den b), where they excel at capturing local dependencies. However, due to their restrictive distributional forms, such models often struggle to capture higher-level structure. Autoregressive models can be improved by incorporating latent variables, often represented as a corresponding sequence, z 1:T. Classical examples include Gaussian state space models and hidden Markov models . The joint distribution, p θ (x 1:T, z 1:T), has the following form: Unlike the simple, parametric form in Eq. 2, evaluating p θ (x t |x <t) now requires integrating over the latent variables, yielding a more flexible distribution. However, performing this integration in practice is typically intractable, requiring approximate inference techniques, like variational inference . Recent works have parameterized these models with deep neural networks, e.g. (; ; ;), using amortized variational inference for inference and learning. Typically, the conditional likelihood, p θ (x t |x <t, z ≤t), and the prior, p θ (z t |x <t, z <t), are Gaussian densities, with temporal conditioning handled through deterministic recurrent networks and the stochastic latent variables. Such models have demonstrated success in audio and video modeling (; ; ; ;). However, design choices for these models remain an active area of research, with each model proposing new combinations of deterministic and stochastic dynamics. Our approach is based on affine autoregressive normalizing flows . Here, we review this basic concept, continuing with the perspective of temporal sequences, however, it is worth noting that these flows were initially developed and demonstrated in static settings. noted that sampling from an autoregressive Gaussian model is an invertible transform, ing in a normalizing flow . Flow-based models transform between simple and complex probability distributions while maintaining exact likelihood evaluation. To see their connection to autoregressive models, we can express sampling a Gaussian random variable, x t ∼ p θ (x t |x <t) (Eq. 2), using the reparameterization trick : where y t ∼ N (y t ; 0, I) is an auxiliary random variable and denotes element-wise multiplication. Thus, x t is an invertible transform of y t, with the inverse given as where division is performed element-wise. The inverse transform in Eq. 
6 acts to normalize (hence, normalizing flow) and therefore decorrelate x 1:T. Given the functional mapping between y t and x t in Eq. 5, the change of variables formula converts between probabilities in each space: log p θ (x 1:T) = log p θ (y 1:T) − log det ∂x 1:T ∂y 1:T. By the construction of Eqs. 5 and 6, the Jacobian in Eq. 7 is triangular, enabling efficient evaluation as the product of diagonal terms: where i denotes the observation dimension, e.g. pixel. For a Gaussian autoregressive model, p θ (y 1:T) = N (y 1:T ; 0, I). With these components, the change of variables formula (Eq. 7) provides an equivalent method for sampling and evaluating the model, p θ (x 1:T), from Eqs. 1 and 2. We can improve upon this simple set-up by chaining together multiple transforms, effectively ing in a hierarchical autoregressive model. Letting y m 1:T denote the variables after the m th transform, the change of variables formula for M transforms is log p θ (x 1:T) = log p θ (y Autoregressive flows were initially considered in the contexts of variational inference and generative modeling . These approaches are, in fact, generalizations of previous approaches with affine transforms. While autoregressive flows are well-suited for sequential data, as mentioned previously, these approaches, as well as many recent approaches (; ;), were initially applied in static settings, such as images. More recent works have started applying flow-based models to sequential data. For instance, van den and distill autoregressive speech models into flow-based models. by using flows to model dynamics of continuous latent variables. Like these recent works, we apply flow-based models to sequential data. However, we demonstrate that autoregressive flows can serve as a useful, general-purpose technique for improving sequence modeling as components of sequential latent variable models. To the best of our knowledge, our work is the first to focus on the aspect of using flows to pre-process sequential data to improve downstream dynamics modeling. Finally, we utilize affine flows (Eq. 5) in this work. This family of flows includes methods like NICE, RealNVP , IAF , MAF , and GLOW . However, there has been recent work in non-affine flows (; ;), which may offer further flexibility. We chose to investigate affine flows for their relative simplicity and connections to previous techniques, however, the use of non-affine flows could in additional improvements. We now describe our approach for sequence modeling with autoregressive flows. Although the core idea is a relatively straightforward extension of autoregressive flows, we show how this simple technique can be incorporated within autoregressive latent variable models (Section 2.2), providing a general-purpose approach for improving dynamics modeling. We first motivate the benefits of affine autoregressive transforms in the context of sequence modeling with a simple example. Consider the discrete dynamical system defined by the following set of equations: where w t ∼ N (w t ; 0, Σ). We can express x t and u t in probabilistic terms as Physically, this describes the noisy dynamics of a particle with momentum and mass 1, subject to Gaussian noise. That is, x represents position, u represents velocity, and w represents stochastic forces. If we consider the dynamics at the level of x, we can use the fact that Thus, we see that in the space of x, the dynamics are second-order Markov, requiring knowledge of the past two time steps. However, at the level of u (Eq. 
13), the dynamics are first-order Markov, requiring only the previous time step. Yet, note that u t is, in fact, an affine autoregressive transform of x t because u t = x t −x t−1 is a special case of the general form. In Eq. 10, we see that the Jacobian of this transform is ∂x t /∂u t = I, so, from the change of variables formula, we have p(x t |x t−1, x t−2) = p(u t |u t−1). In other words, an affine autoregressive transform has allowed us to convert a second-order Markov system into a first-order Markov system, thereby simplifying the dynamics. Continuing this process to move to w t = u t − u t−1, we arrive at a representation that is entirely temporally decorrelated, i.e. no dynamics, because p(w t) = N (w t ; 0, Σ). A sample from this system is shown in Figure 2, illustrating this process of temporal decorrelation. The special case of modeling temporal changes, u t = x t −x t−1 = ∆x t, is a common pre-processing technique; for recent examples, see;;. In fact, ∆x t is a finite differences approximation of the generalized velocity of x, a classic modeling technique in dynamical models and control , redefining the state-space to be first-order Markov. Affine autoregressive flows offer a generalization of this technique, allowing for non-linear transform parameters and flows consisting of multiple transforms, with each transform serving to successively decorrelate the input sequence in time. In analogy with generalized velocity, each transform serves as a moving reference frame, allowing us to focus model capacity on less correlated fluctuations rather than the highly temporally correlated raw signal. We apply autoregressive flows across time steps within a sequence, x 1:T ∈ R T ×D. That is, the observation at each time step, x t ∈ R D, is modeled as an autoregressive function of past observations, x <t ∈ R t−1×D, and a random variable, y t ∈ R D (Figure 3a). We consider flows of the form given in Eq. 5, where µ θ (x <t) and σ θ (x <t) are parameterized by neural networks. In constructing chains of flows, we denote the shift and scale functions at the m th transform as µ m θ (·) and σ m θ (·) respectively. We then calculate y m using the corresponding inverse transform: After the final (M th) transform, the base distribution, p θ (y M 1:T), can range from a simple distribution, e.g. N (y M 1:T ; 0, I), in the case of a flow-based model, up to more complicated distributions in the case of other latent variable models (Section 3.3). While flows of greater depth can improve model capacity, such transforms have limiting drawbacks. In particular, 1) they require that the outputs maintain the same dimensionality as the inputs, R T ×D, 2) they are restricted to affine transforms, and 3) these transforms operate element-wise within a time step. As we discuss in the next section, we can combine autoregressive flows with non-invertible sequential latent variable models (Section 2.2), which do not have these restrictions. We can use autoregressive flows as a component in parameterizing the dynamics within autoregressive latent variable models. To simplify notation, we consider this set-up with a single transform, but a chain of multiple transforms (Section 3.2) can be applied within each flow. Let us consider parameterizing the conditional likelihood, p θ (x t |x <t, z ≤t), within a latent variable model using an autoregressive flow (Figure 3b). 
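To make the transform described in this section concrete, here is a minimal numpy sketch of a single affine autoregressive transform applied across a sequence: the sampling direction of Eq. 5, the inverse (normalizing) direction of Eq. 6, and the log-determinant term of Eq. 8. The one-step linear parameterization of µ θ(·) and σ θ(·) below is our own placeholder for the neural networks a real model would use, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 20, 4                          # sequence length and observation dimensionality

# Placeholder shift/scale functions of the previous observation only; a real model
# would use networks over a longer window x_{<t}.
A_mu = 0.9 * np.eye(D)
A_sig = rng.normal(scale=0.1, size=(D, D))

def mu(prev):
    return prev @ A_mu.T

def sigma(prev):
    return np.exp((prev @ A_sig.T) * 0.1)     # exponentiate to keep scales positive

def forward(y):
    """Sample x_{1:T} from base noise y_{1:T} (Eq. 5)."""
    x = np.zeros_like(y)
    for t in range(len(y)):
        prev = x[t - 1] if t > 0 else np.zeros(y.shape[1])
        x[t] = mu(prev) + sigma(prev) * y[t]
    return x

def inverse(x):
    """Recover y_{1:T} and the log-det-Jacobian of the forward map (Eqs. 6 and 8)."""
    y, logdet = np.zeros_like(x), 0.0
    for t in range(len(x)):
        prev = x[t - 1] if t > 0 else np.zeros(x.shape[1])
        s = sigma(prev)
        y[t] = (x[t] - mu(prev)) / s
        logdet += np.sum(np.log(s))           # log|det dx_t/dy_t| = sum_i log sigma_i
    return y, logdet

y0 = rng.normal(size=(T, D))
x = forward(y0)
y1, logdet = inverse(x)
print(np.allclose(y0, y1), round(logdet, 3))  # the inverse recovers the noise exactly
# Under the change of variables, log p(x) = log N(y1; 0, I) - logdet.
```

Stacking M such transforms simply composes these inverse maps and sums their log-determinant terms, as in the M-transform change of variables formula above.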
To do so, we express a base conditional distribution for y t, denoted as p θ (y t |y <t, z ≤t), which is then transformed into x t via the affine transform in Eq. 5. We have written p θ (y t |y <t, z ≤t) with conditioning on y <t, however, by removing temporal correlations to arrive at y 1:T, our hope is that these dynamics can be primarily modeled through z 1:T. Using the change of variables formula, we can express the latent variable model's log-joint distribution as log p θ (x 1:T, z 1:T) = log p θ (y 1:T, z 1:T) − log det where the joint distribution over y 1:T and z 1:T, in general, is given as Note that the latent prior, p θ (z t |y <t, z <t), can be equivalently conditioned on x <t or y <t, as there is a one-to-one mapping between these variables. We could also consider parameterizing the prior with autoregressive flows, or even constructing a hierarchy of latent variables. However, we leave these extensions for future work, opting to first introduce the basic concept here. Training a latent variable model via maximum likelihood requires marginalizing over the latent variables to evaluate the marginal log-likelihood of observations: log p θ (x 1:T) = log p θ (x 1:T, z 1:T)dz 1:T. This marginalization is typically intractable, requiring the use of approximate inference methods. Variational inference introduces an approximate posterior distribution, q(z 1:T |x 1:T), which provides a lower bound on the marginal log-likelihood: referred to as the evidence lower bound (ELBO). Often, we assume q(z 1:T |x 1:T) is a structured distribution, attempting to explicitly capture the model's temporal dependencies across z 1:T. We can consider both filtering or smoothing inference, however, we focus on the case of filtering, with The conditional dependencies in q can be modeled through a direct, amortized function, e.g. using a recurrent network , or through optimization. Again, note that we can condition q on x ≤t or y ≤t, as there exists a one-to-one mapping between these variables. With the model's joint distribution (Eq. 16) and approximate posterior (Eq. 19), we can then evaluate the ELBO. We derive the ELBO for this set-up in Appendix A, yielding This expression makes it clear that a flow-based conditional likelihood amounts to learning a latent variable model on top of the intermediate learned space provided by y, with an additional factor in the objective penalizing the scaling between x and y. From top to bottom, each figure shows 1) the original frames, x t, 2) the predicted shift, µ θ (x <t), for the frame, 3) the predicted scale, σ θ (x <t), for the frame, and 4) the noise, y t, obtained from the inverse transform. We demonstrate and evaluate the proposed framework on three benchmark video datasets: Moving MNIST , KTH Actions , and BAIR Robot Pushing . Experimental setups are described in Section 4.1, followed by a set of qualitative experiments in Section 4.2. In Section 4.3, we provide quantitative comparisons across different model classes. Further implementation details and visualizations can be found in Appendix B. Anonymized code is available at the following link. We implement three classes of models: 1) standalone autoregressive flow-based models, 2) sequential latent variable models, and 3) sequential latent variable models with flow-based conditional likelihoods. Flows are implemented with convolutional networks, taking in a fixed window of previous frames and outputting shift, µ θ, and scale, σ θ, parameters. 
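Relatedly, the decorrelation intuition behind the toy position-velocity example above (Eqs. 10-13) can be checked numerically. The short simulation below shows that the raw position signal x and its first difference u = Δx remain strongly temporally correlated, while the second difference w = Δu is essentially white. The simulation parameters are arbitrary and the snippet is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma_w = 10000, 0.5

# Simulate u_t = u_{t-1} + w_t (velocity) and x_t = x_{t-1} + u_t (position),
# i.e., a unit-mass particle driven by Gaussian forces w_t.
w = rng.normal(scale=sigma_w, size=T)
u = np.cumsum(w)
x = np.cumsum(u)

def lag1_corr(s):
    return np.corrcoef(s[:-1], s[1:])[0, 1]

dx = np.diff(x)     # recovers the velocity u (up to the initial condition)
ddx = np.diff(dx)   # recovers the white noise w

print("corr(x_t, x_{t+1})     =", round(lag1_corr(x), 3))    # ~1: strongly correlated
print("corr(dx_t, dx_{t+1})   =", round(lag1_corr(dx), 3))   # ~1: still a random walk
print("corr(ddx_t, ddx_{t+1}) =", round(lag1_corr(ddx), 3))  # ~0: temporally decorrelated
```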
The sequential latent variable models consist of convolutional and recurrent networks for both the encoder and decoder networks, following the basic form of architecture that has been previously employed in video modeling (; ;). In the case of a regular sequential latent variable model, the conditional likelihood is a Gaussian that models the frame, x t. In the case of a flow-based conditional likelihood, we model the noise variable, y t, with a Gaussian. In our experiments, the flow components have vastly fewer parameters than the sequential latent variable models. In addition, for models with flow-based conditional likelihoods, we restrict the number of parameters to enable a fairer comparison. These models have fewer parameters than the baseline sequential latent variable models (with non-flow-based conditional likelihoods). See Appendix B for parameter comparisons and architecture details. Finally, flow-based conditional likelihoods only add a constant computational cost per time-step, requiring a single forward pass per time step for both evaluation and generation. To better understand the behavior of autoregressive flows on sequences, we visualize each component as an image. In Figure 4, we show the data, x t, shift, µ θ, scale, σ θ, and noise variable, y t, for standalone flow-based models (left) and flow-based conditional likelihoods (right) on random sequences from the Moving MNIST and BAIR Robot Pushing datasets. Similar visualizations for KTH Actions are shown in Figure 8 in the Appendix. In Figure 9 in the Appendix, we also visualize these quantities for a flow-based conditional likelihood with two transforms. From these visualizations, we can make a few observations. The shift parameters (second row) tend to capture the static , blurring around regions of uncertainty. The scale parameters (third row), on the other hand, tend to focus on regions of higher uncertainty, as expected. The ing noise variables (bottom row) display any remaining structure not modeled by the flow. In comparing standalone flow-based models with flow-based conditional likelihoods in sequential latent variable models, we see that the latter qualitatively contains more structure in y, e.g. dots (Figure 4b, fourth row) or sharper edges (Figure 4d, fourth row). This is expected, as the noise distribution is more expressive in this case. With a relatively simple dataset, like Moving MNIST, a single flow can reasonably decorrelate the input, yielding white noise images (Figure 4a, fourth row). However, with natural image datasets like KTH Actions and BAIR Robot Pushing, a large degree of structure is still present in these images, motivating the use of additional model capacity to model this signal. In Appendix C.1, we quantify the degree of temporal decorrelation performed by flow-based models by evaluating the empirical correlation between frames at successive time steps for both the data, x, and the noise variables, y. In Appendix C.2, we provide additional qualitative . Log-likelihood for each model class are shown in Table 1. We report the average test loglikelihood in nats per pixel per channel for flow-based models and the lower bound on this quantity for sequential latent variable models. Standalone flow-based models perform surprisingly well, even outperforming sequential latent variable models in some cases. Increasing flow depth from 1 to 2 generally in improved performance. 
Sequential latent variable models with flow-based conditional likelihoods outperform their baseline counterparts, despite having fewer parameters. One reason for this disparity is overfitting. Comparing with the training performance reported in Table 3, we see that sequential latent variable models with flow-based conditional likelihoods overfit less. This is particularly apparent on KTH Actions, which contains training and test sets with a high degree of separation (different identities and activities). This suggests that removing static components, like s, yields a reconstruction space that is better for generalization. The quantitative in Table 1 are for a representative sequential latent variable model with a standard convolutional encoder-decoder architecture and fully-connected latent variables. However, many previous works do not evaluate proper lower bounds on log-likelihood, using techniques like down-weighting KL divergences (; ;). Indeed, train SVG with a proper lower bound and report a lower bound of −2.86 nats per pixel on KTH Actions, on-par with our . report log-likelihood on BAIR Robot Pushing, obtaining −1.3 nats per pixel, substantially higher than our . However, their model is significantly larger than the models presented here, consisting of 3 levels of latent variables, each containing 24 steps of flows. We have presented a technique for improving sequence modeling based on autoregressive normalizing flows. This technique uses affine transforms to temporally decorrelate sequential data, thereby simplifying the estimation of dynamics. We have drawn connections to classical approaches, which involve modeling temporal derivatives. Finally, we have empirically shown how this technique can improve sequential latent variable models. Consider the model defined in Section 3.3.1, with the conditional likelihood parameterized with autoregressive flows. That is, we parameterize The joint distribution over all time steps is then given as To perform variational inference, we consider a filtering approximate posterior of the form We can then plug these expressions into the evidence lower bound: Finally, in the filtering setting, we can rewrite the expectation, bringing it inside of the sum (see ;): Because there exists a one-to-one mapping between x 1:T and y 1:T, we can equivalently condition the approximate posterior and the prior on y, i.e. We store a fixed number of past frames in the buffer of each transform, to generate the shift and scale for the transform. For each stack of flow, 4 convolutional layers with kernel size, stride 1 and padding 1 are applied first on each data observation in the buffer, preserving the data shape. The outputs are concatenated along the channel dimension and go through another four convolutional layers also with kernel size, stride 1 and padding 1. Finally, separate convolutional layers with the same kernel size, stride and padding are used to generate shift and scale respectively. For latent variable models, we use a DC-GAN structure , with 4 layers of convolutional layers of kernel size, stride 2 and padding 1 before another convolutional layer of kernel size, stride 1 and no padding to encode the data. The encoded data is sent to an LSTM followed by fully connected layers to generate the mean and log-variance for estimating the approximate posterior distribution of the latent variable, z t. The conditional prior distribution is modeled with another LSTM followed by fully connected layers, taking the previous latent variable as input. 
The decoder takes the inverse structure of the encoder. In the SLVM, we use 2 LSTM layers for modelling the conditional prior and approximate posterior distributions, while in the combined model we use 1 LSTM layer for each. We use the Adam optimizer with a learning rate of 1 × 10^−4 to train all the models. For Moving MNIST, we use a batch size of 16 and train for 200,000 iterations for latent variable models and 100,000 iterations for flow-based and latent variable models with flow-based likelihoods. For BAIR Robot Pushing, we use a batch size of 8 and train for 200,000 iterations for all models. For the KTH dataset we use a batch size of 8 and train for 90,000 iterations for all models. Batch norm is applied to all convolutional layers that do not directly generate distribution or transform parameters. We randomly crop sequences of length 13 from all sequences and evaluate on the last 10 frames. (For 2-flow models we crop sequences of length 16 to fill up all buffers.) Anonymized code is available at the following link. Figure 6: Model Architecture Diagrams. Diagrams are shown for the (a) approximate posterior, (b) conditional prior, and (c) conditional likelihood of the sequential latent variable model. conv denotes a convolutional layer, LSTM denotes a long short-term memory layer, fc denotes a fully-connected layer, and t conv denotes a transposed convolutional layer. For conv and t conv layers, the numbers in parentheses respectively denote the number of filters, filter size, stride, and padding of the layer. For fc and LSTM layers, the number in parentheses denotes the number of units. C ADDITIONAL EXPERIMENTAL RESULTS The qualitative results in Figures 4 and 8 demonstrate that flows are capable of removing much of the structure of the observations, resulting in whitened noise images. To quantitatively confirm the temporal decorrelation resulting from this process, we evaluate the empirical correlation between successive frames, averaged over spatial locations and channels, for the data observations and noise variables. This is an average normalized version of the auto-covariance of each signal with a time delay of 1 time step. Specifically, we estimate the temporal correlation as the lag-1 auto-covariance normalized by the variance and averaged over all dimensions, where x (i,j,k) denotes the value of the image at location (i, j) and channel k, µ (i,j,k) denotes the mean of this dimension, and σ (i,j,k) denotes the standard deviation of this dimension. H, W, and C respectively denote the height, width, and number of channels of the observations. We evaluated this quantity for data examples, x, and noise variables, y, for SLVM w/ 1-AF. The results for training sequences are shown in Table 4. In Figure 7, we plot this quantity during training for KTH Actions. We see that flows do indeed result in a decrease in temporal correlation. Note that because correlation is a measure of linear dependence, one cannot conclude from these results alone that all temporal structure has been removed. From top to bottom, each figure shows 1) the original frames, x t, 2) the predicted shift, µ θ (x <t), for the frame, 3) the predicted scale, σ θ (x <t), for the frame, and 4) the noise, y t, obtained from the inverse transform. Figure 10: Generated Moving MNIST Samples. Sample frame sequences generated from a 2-AF model. Figure 11: Generated BAIR Robot Pushing Samples. Sample frame sequences generated from SLVM w/ 1-AF. | We show how autoregressive flows can be used to improve sequential latent variable models. | 690 | scitldr
It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We discuss a range of copyright detection systems, and why they are particularly vulnerable to attacks. These vulnerabilities are especially apparent for neural-network-based systems. As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net. We then attack this system using simple gradient methods. Adversarial music created this way successfully fools industrial systems, including the AudioTag copyright detector and YouTube's Content ID system. Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks. Machine learning systems are easily manipulated by adversarial attacks, in which small perturbations to input data cause large changes to the output of a model. Such attacks have been demonstrated on a number of potentially sensitive systems, largely in an idealized academic context, and occasionally in the real world. Copyright detection systems are among the most widely used machine learning systems in industry, and the security of these systems is of foundational importance to some of the largest companies in the world. Despite their importance, copyright systems have gone largely unstudied by the ML security community. Common approaches to copyright detection extract features, called fingerprints, from sampled video or audio, and then match these features with a library of known fingerprints. Examples include YouTube's Content ID, which flags copyrighted material on YouTube and enables copyright owners to monetize and control their content. At the time of writing this paper, more than 100 million dollars have been spent on Content ID, which has resulted in more than 3 billion dollars in revenue for copyright holders. Closely related tools such as Google Jigsaw detect and remove videos that promote terrorism or jeopardize national security. There is also a regulatory push for the use of copyright detection systems; the recent EU Copyright Directive requires any service that allows users to post text, sound, or video to implement a copyright filter. A wide range of copyright detection systems exist, most of which are proprietary. It is not possible to demonstrate attacks against all systems, and this is not our goal. Rather, the purpose of this paper is to discuss why copyright detectors are especially vulnerable to adversarial attacks and establish how existing attacks in the literature can potentially exploit audio and video copyright systems. As a proof of concept, we demonstrate an attack against real-world copyright detection systems for music. To do this, we reinterpret a simple version of the well-known "Shazam" algorithm for music fingerprinting as a neural network and build a differentiable implementation of it in TensorFlow. By using a gradient-based attack and an objective that is designed to achieve good transferability to black-box models, we create adversarial music that is easily recognizable to a human, while evading detection by a machine.
With sufficient perturbations, our adversarial music successfully fools industrial systems, 1 including the AudioTag music recognition service , and YouTube's Content ID system . Work on adversarial examples has been focused largely on imaging problems, including image classification, object detection, and semantic segmentation (; ; ; ; ;). More recently, adversarial examples have been studied for non-vision applications such as speech recognition (i.e., speech-to-text) (; ; ;). Attacks on copyright detection systems are different from these applications in a number of important ways that in increased potential for vulnerability. First, digital media can be directly uploaded to a server without passing through a microphone or camera. This is drastically different from physical-world attacks, where adversarial perturbations must survive a data measurement process. For example, a perturbation to a stop sign must be effective when viewed through different cameras, resolutions, lighting conditions, viewing angles, motion blurs, and with different post-processing and compression algorithms. While attacks exist that are robust to these nuisance variables , this difficulty makes even whitebox attacks difficult, leaving some to believe that physical world attacks are not a realistic threat model (a; b). In contrast, a manipulated audio file can be uploaded directly to the web without passing it through a microphone that may render perturbations ineffective. Second, copyright detection is an open-set problem, in which systems process media that does not fall into any known class (i.e., doesn't correspond to any protected audio/video). This is different from the closed-set detection problem in which everything is assumed to correspond to a class. For example, a mobile phone application for music identification may solve a closed-set problem; the developers can assume that every uploaded audio clip corresponds to a known song, and when are uncertain there is no harm in guessing. By contrast, when the same algorithm is used for copyright detection on a server, developers must solve the open-set problem; nearly all uploaded content is not copyright protected, and should be labeled as such. In this case, there is harm in "guessing" an ID when are uncertain, as this may bar users from uploading non-protected material. Copyright detection algorithms must be tuned conservatively to operate in an environment where most content does not get flagged. Finally, copyright detection systems must handle a deluge of content with different labels despite strong feature similarities. Adversarial attacks are known to succeed easily in an environment where two legitimately different audio/video clips may share strong similarities at the feature level. This has been recognized for the ImageNet classification task , where feature overlap between classes (e.g., numerous classes exist for different types of cats/dogs/birds) makes systems highly vulnerable to untargeted attacks in which the attacker perturbs an object from its home class into a different class of high similarity. As a , state of the art defenses for untargeted attacks on ImageNet achieve far lower robustness than classifiers for simpler tasks . Copyright detection systems may suffer from a similar problem; they must discern between protected and non-protected content even when there is a strong feature overlap between the two. 
Fingerprinting algorithms typically work by extracting an ensemble of feature vectors (also called a "hash" in the case of audio tagging) from source content, and then matching these vectors to a library of known vectors associated with copyrighted material. If there are enough matches between a source sample and a library sample, then the two samples are considered identical. Most audio, image, and video fingerprinting algorithms either train a neural network to extract fingerprint features, or extract hand-crafted features. In the former case, standard adversarial methods lead to immediate susceptibility. In the latter case, feature extractors can often be re-interpreted and implemented as shallow neural networks, and then attacked (we will see an example of this below). For video fingerprinting, one successful approach by is to use object detectors to identify objects entering/leaving video frames. An extracted hash then consists of features describing the entering/leaving objects, in addition to the temporal relationships between them. While effective at labeling clean video, recent work has shown that object detectors and segmentation engines are easily manipulated to adversarially place/remove objects from frames ). Works such as build "robust" fingerprints by training networks on commonly used distortions (such as adding a border, adding noise, or flipping the video), but do not consider adversarial perturbations. While such networks are robust against pre-defined distortions, they will not be robust against white-box (or even black-box) adversarial attacks. Similarly, recent plagiarism detection systems such as rely on neural networks to generate a fingerprint for a document. While using the deep feature representations of a document as a fingerprint might in a higher accuracy for the plagiarism model, it potentially leaves the system open to adversarial attacks. Audio fingerprinting might appear to be more secure than the domains described above because practitioners typically rely on hand-crafted features rather than deep neural nets. However, we will see below that even hand-crafted feature extractors are susceptible to attacks. We now describe a commonly used audio fingerprinting/detection algorithm and show how one can build a differentiable neural network resembling this algorithm. This model can then be used to mount black-box attacks on real-world systems. An acoustic fingerprint is a feature vector that is useful for quickly locating a sample or finding similar samples in an audio database. Audio fingerprinting plays a central role in detection algorithms such as Content ID. Therefore, in this section, we describe a generic audio fingerprinting model that will ultimately help us generate adversarial examples. Due to the financially sensitive nature of copyright detection, there are very few publicly available fingerprinting models. One of the few widely used publicly known models is from the Shazam team . Shazam is a popular mobile phone app for identifying music. According to the Shazam paper, a good audio fingerprint should have the following properties: • Temporally localized: every fingerprint hash is calculated using audio samples that span a short time interval. This enables hashes to be matched to a short sub-sample of a song. • Translation invariant: fingerprint hashes are (nearly) the same regardless of where in the song a sample starts and ends. 
• Robust: hashes generated from the original clean database track should be reproducible from a degraded copy of the audio. The spectrogram of a signal, also called the short-time Fourier transform, is a plot that shows the frequency content (Fourier transform) of the waveform over time. After experimenting with various features for fingerprinting, chose to form hashes from the locations of spectrogram peaks. Spectrogram peaks have nice properties such as robustness in the presence of noise and approximate linear superposability. In the next subsection, we build a shallow neural network that captures the key ideas of , while adding extra layers that help produce transferable adversarial examples. In particular, we add an extra smoothing layer that makes our model difficult to attack and helps us craft strong attacks that can transfer to other black-box models. Here we describe the details of the generic neural network model we use for generating the audio fingerprints. Each layer of the network can be seen as a transformation that is applied to its input. We treat the output representation of our network as the fingerprint of the input audio signal. Ideally, we would like to extract features that can uniquely identify a signal while being independent of the exact start or end time of the sample. Convolutional neural networks maintain the temporally localized and translation invariant properties mentioned in section 4.1.1, and so we model the fingerprinting procedure using fully convolutional neural networks. The first network layer convolves with a normalized Hann function, which is a filter of the form where N is the width of the kernel. Convolving with a normalized Hann window smooths the adversarially perturbed audio waveform and the output of this layer is a perturbed but smooth audio sample that is then fingerprinted. This layer removes discontinuities and bad spectral properties that may be introduced into the signal during adversarial optimization and also makes the black-box attacks more efficient by preventing perturbations that do not transfer well to other models. The next convolutional layer computes the spectrogram (aka Short Term Fourier Transform) of the waveform and converts the audio signal from its original domain to a representation in the frequency domain. This is accomplished by convolving with an ensemble of N Fourier kernels of different frequencies, each with N output channels. This convolution has filters of the form where k ∈ 0, 1, · · ·, N − 1 is an output channel index and n ∈ 0, 1, · · ·, N − 1 is the index of the filter coefficient. After this convolution is computed, we apply |x| on the output to get the magnitude of the STFT. After the convolutional layers, we get a feature representation of the audio signal. We call this feature representation φ(x), where x is the input signal. This representation is susceptible to noise and a slight perturbation in the audio signal can change it. Furthermore, this representation is very dense which makes it relatively hard to store and search against all audio signals in the database. To address these issues, suggest using the local maxima of the spectrogram as features. We can find local maxima within our neural net framework by applying a max pooling function over the feature representation φ(x). We then find the places where the output of the maxpool equals the original feature representation (i.e., the locations where φ(x) = maxpool (φ(x))). 
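To make the pipeline just described concrete, the following is a minimal PyTorch sketch of such a surrogate fingerprinting network: a normalized Hann smoothing convolution, a Fourier-kernel "convolution" whose magnitude gives the feature map φ(x), and an equality-with-max-pooling test that yields the binary peak map ψ(x). The window length, hop size and pooling width are illustrative choices of ours, not values taken from the paper.

```python
import math
import torch
import torch.nn.functional as F

def phi(audio, n_fft=1024, hop=512):
    """Differentiable magnitude-spectrogram features of a (batch, samples) waveform."""
    # Layer 1: smoothing convolution with a normalized Hann window.
    hann = torch.hann_window(n_fft)
    hann = (hann / hann.sum()).view(1, 1, -1)
    x = F.conv1d(audio.unsqueeze(1), hann, padding=n_fft // 2).squeeze(1)
    # Layer 2: Fourier-kernel "convolution", then magnitude -> spectrogram.
    frames = x.unfold(1, n_fft, hop)                        # (batch, T, n_fft)
    n = torch.arange(n_fft, dtype=torch.float32)
    ang = 2 * math.pi * n[:, None] * n[None, :] / n_fft     # phase of kernel k at sample n
    real = frames @ torch.cos(ang)
    imag = frames @ torch.sin(ang)
    return torch.sqrt(real ** 2 + imag ** 2 + 1e-12)        # (batch, T, n_fft)

def psi(audio, pool=15):
    """Binary fingerprint: locations where phi equals its local max-pool."""
    spec = phi(audio).unsqueeze(1)                          # (batch, 1, T, n_fft)
    peaks = spec == F.max_pool2d(spec, pool, stride=1, padding=pool // 2)
    return peaks.float()
```

In the attack sketched later, ψ of the clean signal plays the role of the reference fingerprint that the perturbed audio must no longer match.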
The resulting binary map of local maxima locations is the fingerprint of the signal and can be used to search for a signal against a database of previously processed signals. We will refer to this binary fingerprint as ψ(x), where x is the input signal. Figure 1 depicts the 2-layer convolutional network we use in this work for generating signal fingerprints. To craft an adversarial perturbation, we need a differentiable surrogate loss that measures how well an extracted fingerprint matches a reference. The CNN described in section 4.2 uses spectrogram peaks to generate fingerprints, but we did not yet specify a loss for quantifying how close two fingerprints are. Once we have such a loss, we can use standard gradient methods to find a perturbation δ that can be added to an audio signal to prevent copyright detection. To ensure the similarity between perturbed and clean audio, we bound the perturbation δ. That is, we enforce ||δ||_p ≤ ε, where ||·||_p is the ℓp-norm of the perturbation and ε is the perturbation budget available to the adversary. In our experiments, we use the ℓ∞-norm as our measure of perturbation size. The simplest similarity measure between two binary fingerprints is simply the Hamming distance. Since the fingerprinting model outputs a binary fingerprint ψ(x), we can simply measure the number of local maxima that the signals x and y share by |ψ(x)·ψ(y)|. To make a differentiable loss function from this similarity measure, we use the relaxation given in equation 3. In the white-box case where the fingerprinting system is known, attacks using this loss are extremely effective. However, attacks using this loss are extremely brittle and do not transfer well; one can minimize this loss by changing the locations of local maxima in the spectrogram by just one pixel. Such small changes in the spectrogram are unlikely to transfer reliably to black-box industrial systems. To improve the transferability of our adversarial examples, we propose a robust loss that promotes large movements in the local maxima of the spectrogram. We do this by moving the locations of local maxima in φ(x) outside of any neighborhood of the local maxima of φ(y). To efficiently implement this constraint within a neural net framework, we use two separate max pooling layers, one with a bigger width w1 (the same width used in fingerprint generation), and the other with a smaller width w2. If a location in the spectrogram yields an output of the w1 pooling strictly larger than the output of the w2 pooling, we can be sure that there is no spectrogram peak within radius w2 of that location. Equation 4 describes a loss function that penalizes the local maxima of x that lie in the w2 neighborhood of local maxima of y. This loss function forces the outputs of the two max pooling layers to differ by at least a margin c. Finally, we make our loss function differentiable by replacing the maximum operator with a smoothed max function with a smoothing hyperparameter α. As α → ∞, the smoothed max function more accurately approximates the exact max function. For simplicity, we chose α = 1 for all experiments. We solve the bounded optimization problem of minimizing J over perturbations δ satisfying ||δ||_∞ ≤ ε, where x is the benign audio sample and J is the loss function defined in equation 4 with the smoothed max function. Note that unlike common adversarial example generation problems from the literature, our formulation is a minimization problem because of how we defined the objective.
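The sketch below shows one plausible reading of this construction, reusing the φ and ψ helpers (and the imports) from the earlier sketch. The hinge form of the loss, the log-sum-exp choice of smoothed max, and all numerical values (w1, w2, the margin, ε, the learning rate, the number of steps) are assumptions of ours for illustration, not values from the paper; the projected-gradient loop anticipates the optimization procedure described in the next paragraph.

```python
def smooth_max_pool(spec, k, alpha=1.0):
    """Log-sum-exp pooling: a differentiable stand-in for max pooling
    (approaches exact max pooling as alpha grows; zero border padding is ignored here)."""
    b, c, h, w = spec.shape
    patches = F.unfold(alpha * spec, kernel_size=k, padding=k // 2)   # (b, c*k*k, h*w)
    patches = patches.view(b, c, k * k, h * w)
    return torch.logsumexp(patches, dim=2).view(b, c, h, w) / alpha

def dodge_loss(spec_adv, peaks_ref, w1=15, w2=5, margin=1.0, alpha=1.0):
    """Hinge-style reading of the robust loss: at every peak of the reference
    fingerprint, the w1-neighbourhood (smoothed) max of the adversarial spectrogram
    must exceed its w2-neighbourhood max by `margin`, which pushes the adversarial
    peak outside the w2-neighbourhood of the reference peak."""
    big = smooth_max_pool(spec_adv, w1, alpha)
    small = smooth_max_pool(spec_adv, w2, alpha)
    return (peaks_ref * F.relu(margin - (big - small))).sum()

def attack(x, eps=0.05, steps=500, lr=1e-3):
    """Minimize the loss over an L_inf-bounded perturbation (Adam step + clipping)."""
    peaks_ref = psi(x).detach()                   # fingerprint of the clean signal
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        spec_adv = phi(x + delta).unsqueeze(1)    # features of the perturbed audio
        dodge_loss(spec_adv, peaks_ref).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)               # project back onto the L_inf ball
    return (x + delta).detach()
```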
We solve this problem using projected gradient descent, in which each iteration updates the perturbation using Adam and then clips the perturbation to ensure that the ℓ∞ constraint is satisfied. The optimization problem defined in equation 6 tries to create an adversarial example with a fingerprint that does not look like the original signal's fingerprint. While this approach can trick the search algorithm used in copyright detection systems by lowering its confidence, it can result in unnatural-sounding perturbations. Alternatively, we can try to enforce the perturbed signal's fingerprint to be similar to a different audio signal. Due to the approximate linear superposability of spectrogram peaks, this makes the adversarial example sound more natural, and more like the target audio. To achieve this goal, we first introduce a loss function that tries to make two signals look similar rather than different. As described in equation 7, such a loss can be obtained by swapping the order of the max over the big and small neighborhoods in equation 4. Note that we still use the smooth maximum from equation 5. Using this loss function, we define the optimization problem in equation 8, which not only tries to make the adversarial example different from the original signal x, but also forces similarity to another signal y. Here λ is a scale parameter that controls how much we enforce the similarity between the fingerprints of x + δ and y. We call adversarial examples generated using equation 8 "remix" adversarial examples, as they sound more like a remix, and refer to examples generated using equation 6 as default adversarial examples. While a successful attack's adversarial perturbation may be larger in the case of remix adversarial examples (due to the additional term in the objective function), the perturbation sounds more natural. We test the effectiveness of our black-box attacks on two real-world audio search/copyright detection systems. The inner workings of both systems are proprietary, and therefore it is necessary to attack these systems with black-box transfer attacks. Both systems claim to be robust against noise and other input signal distortions. We test our attack on a dataset containing top Billboard songs from the past 10 years. We extract a 30-second fragment of each song and craft both our default and remix adversarial examples for them. Although both types of adversarial examples can dodge detection, they have very different characteristics. The default adversarial examples (equation 6) work by removing identifiable frequencies from the original signal, while the remix adversarial examples (equation 8) work by introducing new frequencies to the signal that confuse the real-world systems. Before evaluating black-box transfer attacks against real-world systems, we evaluate the effectiveness of a white-box attack against our own proposed model. Doing so gives a baseline of how effective an adversarial example can be if the details of a model are ever released or leaked. To create white-box attacks against our model, we use the loss function defined in equation 3. By optimizing this function, we can remove almost all of the fingerprints identified by our model with perturbations that are unnoticeable by humans. Table 1 shows the norms of the perturbations required to remove 90%, 95%, and 99% of the fingerprint hashes. AudioTag is a free music recognition service with millions of songs in its database.
When a user uploads a short audio fragment on this website, AudioTag compares the fragment against a database of songs and identifies which song the fragment belongs to. AudioTag claims to be "robust to sound distortions, noises and even speed variation, and will therefore recognize songs even in low quality audio recordings". Therefore, one would expect that low-amplitude non-adversarial noise should not affect this system. As shown in Figure 2, AudioTag can accurately detect the songs corresponding to the benign signal. However, the system fails to detect both the default and remix adversarial examples built for them. During our experiments with AudioTag, we realized that this system is relatively sensitive to our proposed attacks and can be fooled with relatively small perturbation budgets. Qualitatively, the magnitude of the noise required to fool this system is small and not easily noticeable by humans. Based on this observation, we suspect that the architecture of the fingerprinting model used in AudioTag may have similarities to our surrogate model in section 4.2. Table 2 shows the ℓ∞ and ℓ2 norms of the perturbations required to fool AudioTag on 90% of the songs in our dataset. We also verified AudioTag's claim of being robust to input distortions by applying random perturbations to the audio recordings. To fool AudioTag with random noise, the magnitude (ℓ∞ norm) of the noise must be roughly 4 times larger than the noise we craft using equation 6. YouTube is a video sharing website that allows users to upload their own video files. YouTube has developed a system called "Content ID" to automatically tag user-uploaded content that contains copyrighted material. Using this system, copyright owners can submit their content and have YouTube scan uploaded videos against it. As shown in the screenshot in Figure 4, YouTube Content ID can successfully identify the benign songs we use in our experiment. At the time of writing this paper, both our default and remix attacks successfully evade Content ID and go undetected. However, YouTube Content ID is significantly more robust to our attacks than AudioTag. To fool Content ID, we had to use a larger value of the perturbation budget ε. This makes perturbations quite noticeable, although songs are still immediately recognizable by a human. Furthermore, a perturbation with non-adversarial random noise must have an ℓ∞ norm 3 times larger than our adversarial perturbations to successfully avoid being detected. We repeated our experiments with identical hyper-parameters on the songs from our dataset. Table 2 shows the ℓ∞ norms (i.e., the parameter ε) and ℓ2 norms of the perturbations required to fool YouTube on 67% of the songs in our dataset. Furthermore, Figure 3 shows the recall of YouTube's copyright detection tool on our dataset for different magnitudes of perturbations. Copyright detection systems are an important category of machine learning methods, but the robustness of these systems to adversarial attacks has not yet been addressed by the machine learning community. We discussed the vulnerability of copyright detection systems and explained how different kinds of systems may be vulnerable to attacks using known methods. As a proof of concept, we build a simple song identification method using neural network primitives and attack it using well-known gradient methods. Surprisingly, attacks on this model transfer well to online systems. Note that none of the authors of this paper are experts in audio processing or fingerprinting systems.
The implementations used in this study are far from optimal, and we expect that attacks can be strengthened using sharper technical tools, including perturbation types that are less perceptible to the human ear. Furthermore, we are performing transfer attacks using fairly rudimentary surrogate models that rely on hand-crafted features, while commercial systems likely rely on fully trainable neural nets. Our goal here is not to facilitate copyright evasion, but rather to raise awareness of the threats posed by adversarial examples in this space, and to highlight the importance of hardening copyright detection and content control systems to attack. A number of defenses already exist that can be utilized for this purpose, including adversarial training. | Adversarial examples can fool YouTube's copyright detection system | 691 | scitldr
Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space. Given an input x and associated target y, EP proceeds in two phases: in the first phase neurons evolve freely towards a first steady state; in the second phase output neurons are nudged towards y until they reach a second steady state. However, in existing implementations of EP, the learning rule is not local in time: the weight update is performed after the dynamics of the second phase have converged and requires information of the first phase that is no longer available physically. This is a major impediment to the biological plausibility of EP and its efficient hardware implementation. In this work, we propose a version of EP named Continual Equilibrium Propagation (C-EP) where neuron and synapse dynamics occur simultaneously throughout the second phase, so that the weight update becomes local in time. We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT (Theorem 1). We demonstrate training with C-EP on MNIST and generalize C-EP to neural networks where neurons are connected by asymmetric connections. We show through experiments that the more closely the network updates follow the gradients of BPTT, the better it performs in terms of training. These results bring EP a step closer to biology while maintaining its intimate link with backpropagation. A motivation for deep learning is that a few simple principles may explain animal intelligence and allow us to build intelligent machines, and learning paradigms must be at the heart of such principles, creating a synergy between neuroscience and Artificial Intelligence (AI) research. In the deep learning approach to AI, backpropagation thrives as the most powerful algorithm for training artificial neural networks. Unfortunately, its implementation on conventional computers or dedicated hardware consumes more energy than the brain by several orders of magnitude. One path towards reducing the gap between brains and machines in terms of power consumption is to investigate alternative learning paradigms relying on locally available information, which would allow radically different hardware implementations: such local learning rules could be used for the development of extremely energy-efficient learning-capable hardware. Investigating such bioplausible learning schemes with real-world applicability is therefore of interest not only for neuroscience, but also for developing neuromorphic computing hardware that takes inspiration from the information encoding, dynamics and topology of the brain to reach fast and energy-efficient AI. In these regards, Equilibrium Propagation (EP) is an alternative style of computation for estimating error gradients that presents significant advantages. EP belongs to the family of contrastive Hebbian learning (CHL) algorithms and therefore benefits from an important feature of these algorithms: neural dynamics and synaptic updates depend solely on information that is locally available. As a CHL algorithm, EP applies to convergent RNNs, i.e. RNNs that are fed by a static input and converge to a steady state.
Training such a convergent RNN consists in adjusting the weights so that the steady state corresponding to an input x produces output values close to associated targets y. CHL algorithms proceed in two phases: in the first phase, neurons evolve freely without external influence and settle to a (first) steady state; in the second phase, the values of output neurons are influenced by the target y and the neurons settle to a second steady state. CHL weight updates consist in a Hebbian rule strengthening the connections between co-activated neurons at the first steady state, and an anti-Hebbian rule with opposite effect at the second steady state. A difference between Equilibrium Propagation and standard CHL algorithms is that output neurons are not clamped in the second phase but elastically pulled towards the target y. A second key property of EP is that, unlike CHL and other related algorithms, it is intimately linked to backpropagation. It has been shown that synaptic updates in EP follow gradients of recurrent backpropagation (RBP) and backpropagation through time (BPTT) . This makes it especially attractive to bridge the gap between neural networks developed by neuroscientists, neuromorphic researchers and deep learning researchers. Nevertheless, the bioplausibility of EP still undergoes two major limitations. First, although EP is local in space, it is non-local in time. In all existing implementations of EP the weight update is performed after the dynamics of the second phase have converged, when the first steady state is no longer physically available. Thus the first steady state has to be artificially stored. Second, the network dynamics have to derive from a primitive function, which is equivalent to the requirement of symmetric weights in the Hopfield model. These two requirements are biologically unrealistic and also hinder the development of efficient EP computing hardware. In this work, we propose an alternative implementation of EP (called C-EP) which features temporal locality, by enabling synaptic dynamics to occur throughout the second phase, simultaneously with neural dynamics. We then address the second issue by adapting C-EP to systems having asymmetric synaptic connections, taking inspiration from; we call this modified version C-VF. More specifically, the contributions of the current paper are the following: • We introduce Continual Equilibrium Propagation (C-EP, Section 3.1-3.2), a new version of EP with continual weight updates: the weights of the network are adjusted continually in the second phase of training using local information in space and time. Neuron steady states do not need to be stored after the first phase, in contrast with standard EP where a global weight update is performed at the end of the second phase. Like standard EP, the C-EP algorithm applies to networks whose synaptic connections between neurons are assumed to be symmetric and tied. • We show mathematically that, provided that the changes in synaptic strengths are sufficiently slow (i.e. the learning rates are sufficiently small), at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss obtained with BPTT (Theorem 1 and Fig. 2, Section 3.3). We call this property the Gradient Descending Dynamics (GDD) property, for consistency with the terminology used in. • We demonstrate training with C-EP on MNIST, with accuracy approaching the one obtained with standard EP (Section 4.2). 
• Finally, we adapt our C-EP algorithm to the more bio-realistic situation of a neural network with asymmetric connections between neurons. We call this modified version C-VF as it is inspired by the Vector Field method proposed in. We demonstrate this approach on MNIST, and show numerically that the training performance is correlated with the satisfaction of Gradient Descending Dynamics (Section 4.3). For completeness, we also show how the Recurrent Backpropagation (RBP) algorithm of; relates to C-EP, EP and BPTT. We illustrate the equivalence of these four algorithms on a simple analytical model (Fig. 3) and we develop their relationship in Appendix A. Convergent RNNs With Static Input. We consider the supervised setting, where we want to predict a target y given an input x. The model is a recurrent neural network (RNN) parametrized by θ and evolving according to the dynamics: F is the transition function of the system. Assuming convergence of the dynamics before time step T, we have s T = s * where s * is the steady state of the network characterized by The number of timesteps T is a hyperparameter that we choose large enough so that s T = s * for the current value of θ. The goal is to optimize the parameter θ in order to minimize a loss: Algorithms that optimize the loss L * for RNNs include Backpropagation Through Time (BPTT) and the Recurrent Backpropagation (RBP) algorithm of; , presented in Appendix B. Equilibrium Propagation (EP). EP is a learning algorithm that computes the gradient of L * in the particular case where the transition function F derives from a scalar function Φ, i.e. with F of the form F (x, s, θ) = ∂Φ ∂s (x, s, θ). The algorithm consists in two phases (see Alg. 1 of Fig. 1 have shown that the gradient of the loss L * can be estimated based on the two steady states s * and s β * . Specifically, in the limit This section presents the main theoretical contributions of this paper. We introduce a new algorithm to optimize L * (Eq. 3): a new version of EP with continual parameter updates that we call C-EP. Unlike typical machine learning algorithms (such as BPTT, RBP and EP) in which the weight updates occur after all the other computations in the system are performed, our algorithm offers a mechanism in which the weights are updated continuously as the states of the neurons change. The key idea to understand how to go from EP to C-EP is that the gradient of EP appearing in Eq. reads as the following telescopic sum: In Eq. we have used that s β 0 = s * and s β t → s β * as t → ∞. Here lies the very intuition of continual updates motivating this work; instead of keeping the weights fixed throughout the second phase and updating them at the end of the second phase based on the steady states s * and s β *, as in EP (Alg. 1 of Fig. 1), the idea of the C-EP algorithm is to update the weights at each time t of the second phase between two consecutive states s β t−1 and s β t (Alg. 2 of Fig. 1). One key difference in C-EP compared to EP though, is that, in the second phase, the weight update at time step t influences the neural states at time step t + 1 in a nontrivial way, as illustrated in the computational graph of Fig. 2. In the next subsection we define C-EP using notations that explicitly show this dependency. Left. Pseudo-code of EP. This is the version of EP for discrete-time dynamics introduced in. Right. Pseudo-code of C-EP with simplified notations (see section 3.2 for a formal definition of C-EP). Difference between EP and C-EP. 
In EP, one global parameter update is performed at the end of the second phase; in C-EP, parameter updates are performed throughout the second phase. Eq. 5 shows that the continual updates of C-EP add up to the global update of EP. The first phase of C-EP is the same as that of EP (see Fig. 1). In the second phase of C-EP, the parameter variable is regarded as another dynamic variable θ_t that evolves with time t along with s_t. The dynamics of s_t and θ_t in the second phase of C-EP depend on the values of the two hyperparameters β (the hyperparameter of influence) and η (the learning rate), therefore we write s_t^{β,η} and θ_t^{β,η} to show this dependence explicitly. With both the neurons and the synapses now evolving in the second phase, the dynamic variables s_t^{β,η} and θ_t^{β,η} are updated jointly at every time step. The difference in C-EP compared to EP is that the value of the parameter used to update s_{t+1}^{β,η} is the current θ_t^{β,η}, not θ. Provided the learning rate η is small enough, i.e. the synapses are slow compared to the neurons, this effect is weak. Intuitively, in the limit η → 0, the parameter changes are negligible so that θ_t^{β,η} can be approximated by its initial value θ_0^{β,η} = θ. Under this approximation, the dynamics of s_t^{β,η} in C-EP and the dynamics of s_t^β in EP are the same. See Fig. 3 for a simple example, and Appendix A.3 for a proof in the general case. Now we prove that, provided the hyperparameter β and the learning rate η are small enough, the dynamics of the neurons and the weights follow the gradients of BPTT (Theorem 1 and Fig. 2). For a formal statement of this property, we define the normalized (continual) updates of C-EP, as well as the gradients of the loss L = ℓ(s_T, y) after T time steps, computed with BPTT, which correspond to the neuron and parameter gradients at time t defined informally above. The following result makes this statement more formal. Theorem 1 (GDD Property). Let s_0, s_1, ..., s_T be the convergent sequence of states and denote s_* = s_T the steady state. Further assume that there exists some step K with 0 < K ≤ T such that s_* = s_T = s_{T−1} = ... = s_{T−K}. Then, in the limit η → 0 and β → 0, the first K normalized updates in the second phase of C-EP are equal to the negatives of the first K gradients of BPTT, i.e. ∆_s^{C-EP}(β, η, t) = −∇_s^{BPTT}(t) and ∆_θ^{C-EP}(β, η, t) = −∇_θ^{BPTT}(t) for t = 0, 1, ..., K. Theorem 1 shows that in the second phase of C-EP, neurons and synapses descend the gradients of the loss L obtained with BPTT, with the hyperparameters β and η playing the role of learning rates for s_t^{β,η} and θ_t^{β,η}, respectively. Fig. 3 illustrates Theorem 1 with a simple dynamical system for which the normalized updates ∆^{C-EP} and the gradients ∇^{BPTT} are analytically tractable (see Appendix C for derivation details). In this section, we validate our continual version of Equilibrium Propagation by training on the MNIST data set with two models. The first model is a vanilla RNN with tied and symmetric weights: the dynamics of this model approximately derive from a primitive function, which allows training with C-EP. The second model is a discrete-time RNN with untied and asymmetric weights, which is therefore closer to biology. We train this second model with a modified version of C-EP which we call C-VF (Continual Vector Field), as it is inspired by the vector-field variant of EP proposed in prior work. Previous work showed with simulations the intuitive result that, if a model is such that the normalized updates of EP 'match' the gradients of BPTT (i.e. if they are approximately equal), then the model trained with EP performs as well as the model trained with BPTT.
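Before moving to the experiments, the second phase just described can be sketched in a few lines of PyTorch. The exact update equations were not reproduced above, so the nudged neuron step and the Hebbian-difference weight step below are assumptions consistent with the standard EP formulation; Phi is the primitive function, ell the cost (both assumed to return scalars), and theta is treated as a single flat tensor for simplicity.

```python
import torch

def cep_second_phase(Phi, ell, x, y, s, theta, beta=0.1, eta=1e-3, K=20):
    """Sketch of the C-EP second phase: at every time step the neurons follow the
    nudged dynamics and the weights are immediately updated from two consecutive
    neuron states, so the first steady state never needs to be stored."""
    for _ in range(K):
        s_prev = s.detach().requires_grad_(True)
        th = theta.detach().requires_grad_(True)
        dPhi_ds, dPhi_dth = torch.autograd.grad(Phi(x, s_prev, th), (s_prev, th))
        dl_ds, = torch.autograd.grad(ell(s_prev, y), s_prev)
        s_next = dPhi_ds - beta * dl_ds                     # nudged neuron dynamics
        dPhi_dth_next, = torch.autograd.grad(Phi(x, s_next, th), th)
        delta_theta = (dPhi_dth_next - dPhi_dth) / beta     # normalized update Delta_theta(t)
        theta = (theta + eta * delta_theta).detach()        # continual, local-in-time weight step
        s = s_next.detach()
    return s, theta
```

With Phi(x, s, W) = (1/2) s·W·s plus an input term, and ignoring the activation function, this approximately recovers the vanilla RNN model introduced in the next section.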
Along the same lines, we show in this work that the more closely the EP normalized updates follow the gradients of BPTT before training, the better the resulting training performance. We choose to implement C-EP and C-VF on vanilla RNNs to accelerate simulations. Vanilla RNN with symmetric weights trained by C-EP. The first phase dynamics is defined as s_{t+1} = σ(W · s_t + W_x · x), where σ is an activation function, W is a symmetric weight matrix connecting the layers s, and W_x is a matrix connecting the input x to the layers s. Although the dynamics are not directly defined in terms of a primitive function, note that s_{t+1} ≈ ∂Φ/∂s(s_t, W) with Φ(s, W) = (1/2) s · W · s if we ignore the activation function σ. Following the definitions above, we define the normalized updates of this model accordingly. Note that this model applies to any topology as long as existing connections have symmetric values: this includes deep networks with any number of layers (see Appendix E for detailed descriptions of the models used in the experiments). More explicitly, for a network whose layers of neurons are s^0, s^1, ..., s^N, with W_{n,n+1} connecting the layers s^{n+1} and s^n in both directions, the corresponding updates are defined layer-wise in the same fashion. Vanilla RNN with asymmetric weights trained by C-VF. In this model, the dynamics in the first phase is the same as above, but now the weight matrix W is no longer assumed to be symmetric, i.e. the reciprocal connections between neurons are not constrained. In this setting, the weight dynamics in the second phase is replaced by a version suited to asymmetric weights, with correspondingly asymmetric normalized updates. Like the previous model, the vanilla RNN with asymmetric weights also applies to deep networks with any number of layers. Although in C-VF the weight dynamics do not derive from a primitive function, the (bioplausible) normalized weight updates can approximately follow the gradients of BPTT, provided that the values of reciprocal connections are not too dissimilar: this is illustrated in Fig. 5 (as well as in Fig. 12 and Fig. 13 of Appendix E.6) and proved in Appendix D.2. This property motivates the following training experiments. Training on MNIST with EP, C-EP and C-VF. "#h" stands for the number of hidden layers. We indicate over 5 trials the mean and standard deviation of the test error (mean train error in parentheses). T (resp. K) is the number of iterations in the 1st (resp. 2nd) phase. For C-VF, the initial angle between forward (θ_f) and backward (θ_b) weights is Ψ(θ_f, θ_b) = 0°. Right: Test error rate on MNIST achieved by C-VF as a function of the initial Ψ(θ_f, θ_b). Experiments are performed with multi-layered vanilla RNNs (with symmetric weights) on MNIST. The table of Fig. 4.1 presents the results obtained with C-EP training benchmarked against standard EP training (see Appendix E for model details and Appendix F.1 for training conditions). Although the test error of C-EP approaches that of EP, we observe a degradation in accuracy. This is because, although Theorem 1 guarantees Gradient Descending Dynamics (GDD) in the limit of infinitely small learning rates, in practice we have to strike a balance between a learning rate that is small enough to ensure this condition but not so small that convergence cannot be observed within a reasonable number of epochs. As seen in Fig. 5 (b), the finite learning rate η of continual updates leads to ∆^{C-EP}(β, η, t) curves splitting apart from the −∇^{BPTT}(t) curves.
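The degree to which the continual updates follow BPTT is quantified in the analysis below by the angle between the two total update vectors accumulated over the second phase. A minimal helper (the function name and the 1e-12 guard are ours) might look like:

```python
def angle_deg(a, b):
    """Angle, in degrees, between two flattened update/gradient vectors,
    e.g. the total C-EP (or C-VF) update and -grad BPTT over the second phase."""
    a, b = a.flatten(), b.flatten()
    cos = torch.dot(a, b) / (a.norm() * b.norm() + 1e-12)
    return torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))
```

The quantity Ψ(∆(tot), −∇^{BPTT}(tot)) discussed next is exactly this angle evaluated before training.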
As seen in Fig. 5 (a), this splitting effect is emphasized with depth: before training, angles between the normalized updates of C-EP and the gradients of BPTT reach 50 degrees for two hidden layers. The deeper the network, the more difficult it is for the C-EP dynamics to follow the gradients provided by BPTT. As evidence, we show in Appendix F.2 that when we use extremely small learning rates throughout the second phase (θ ← θ + η_tiny ∆_θ^{C-EP}) and rescale up the resulting total weight update (θ ← θ − ∆θ_tot + (η/η_tiny) ∆θ_tot), we recover standard EP. Depending on whether the updates occur continuously during the second phase and whether the system obeys general dynamics with untied forward and backward weights, we can span a large range of deviations from the ideal conditions of Theorem 1. Fig. 5 (b) qualitatively depicts these deviations with a model for which the normalized updates of EP match the gradients of BPTT (EP); with continual weight updates, the normalized updates and gradients start splitting apart (C-EP), and even more so if the weights are untied (C-VF). Protocol. In order to create these deviations from Theorem 1 and study the consequences in terms of training, we proceed as follows. For each C-VF simulation, we tune the initial angle between forward weights (θ_f) and backward weights (θ_b) between 0 and 180°. We denote this angle Ψ(θ_f, θ_b); see Appendix F.1 for the angle definition and the angle tuning technique employed. For each of these weight initializations, we compute the angle between the total normalized update provided by C-VF, ∆^{C-VF}(tot), and the total gradient provided by BPTT, ∇^{BPTT}(tot), before training. This graphical representation spreads the algorithms from EP, which best satisfies the GDD property (leftmost point in green at ∼20°), to C-VF, which satisfies the GDD property the least (rightmost points in red and orange at ∼100°). As expected, high angles between the updates of C-VF and the gradients of BPTT lead to high error rates, which can reach 90% when Ψ(∆^{C-VF}(tot), −∇^{BPTT}(tot)) exceeds 100°. More precisely, the inset of Fig. 5 shows the same data but focuses only on results generated by initial weight angles lying below 90°. From standard EP with one hidden layer to C-VF with two hidden layers, the test error increases monotonically with Ψ(∆(tot), −∇^{BPTT}(tot)) but does not exceed 5.05% on average. This confirms the importance of proper weight initialization when weights are untied, as also discussed in other contexts. When the initial weight angle is 0°, the impact of untying the weights on classification accuracy remains limited, as shown in the table of Fig. 4.1. Upon untying the forward and backward weights, the test error increases by ∼0.2% with one hidden layer and by ∼0.5% with two hidden layers compared to standard C-EP. Equilibrium Propagation is an algorithm that leverages the dynamical nature of neurons to compute weight gradients through the physics of the neural network. C-EP embraces simultaneous synapse and neuron dynamics, removing the need for artificial memory units to store the neuron values between the two phases. The C-EP framework preserves the equivalence with Backpropagation Through Time: in the limit of sufficiently slow synaptic dynamics (i.e. small learning rates), the system satisfies Gradient Descending Dynamics (Theorem 1). Our experimental results confirm this theorem. When training our vanilla RNN with symmetric weights with C-EP while ensuring convergence within 100 epochs, a modest reduction in MNIST accuracy is seen with respect to standard EP.
This accuracy reduction can be eliminated by using smaller learning rates and rescaling up the total weight update at the end of the second phase (Appendix F.2). On top of extending the theory of EP, Theorem 1 also appears to provide a statistically robust tool for C-EP based learning. Our experimental results show, as in earlier work on EP, that for a given network with specified neuron and synapse dynamics, the more closely the updates of Equilibrium Propagation follow the gradients provided by Backpropagation Through Time before training (in terms of angle, in this work), the better this network can learn. Our C-EP and C-VF algorithms exhibit features reminiscent of biology. C-VF extends C-EP training to RNNs with asymmetric weights between neurons, as is the case in biology. Its learning rule, local in space and time, is furthermore closely related to Spike-Timing-Dependent Plasticity (STDP), a learning rule widely studied in neuroscience and observed in vitro and in vivo from neural recordings in the hippocampus. In STDP, the synaptic strength is modulated by the relative timings of pre- and post-synaptic spikes within a precise time window. (In these comparison figures, each randomly selected synapse corresponds to one color; while dashed and continuous lines coincide for standard EP, they split apart upon untying the weights and using continual updates.) Strikingly, the same rule that we use for C-VF learning can approximate STDP correlations in a rate-based formulation, as shown through numerical experiments in earlier work. From this viewpoint our work brings EP a step closer to biology. However, C-EP and C-VF do not aim at being models of biological learning per se, in the sense that they would account for how the brain works or how animals learn, for which Reinforcement Learning might be a more suitable learning paradigm. The core motivation of this work is to propose a fully local implementation of EP, in particular to foster its hardware implementation. When computed on a standard computer, due to the use of small learning rates to mimic analog dynamics within a finite number of epochs, training our models with C-EP and C-VF entails long simulation times. With a Titan RTX GPU, training a fully connected architecture on MNIST takes 2 hours 39 minutes with 1 hidden layer and 10 hours 49 minutes with 2 hidden layers. On the other hand, C-EP and C-VF might be particularly efficient in terms of speed and energy consumption when operated on neuromorphic hardware that employs analog device physics. To this purpose, our work can provide engineering guidance for mapping our algorithm onto a neuromorphic system. Fig. 5 (a) shows that hyperparameters should be tuned so that, before training, C-EP updates stay within 90° of the gradients provided by BPTT. More concretely, in practice this amounts to tuning the degree of symmetry of the dynamics, for instance the angle between forward and backward weights (see Fig. 4.1). Our work is one step towards bridging Equilibrium Propagation with neuromorphic computing and thereby energy-efficient implementations of gradient-based learning algorithms. A PROOF OF THEOREM 1 In this appendix, we prove Theorem 1, which we recall here. Theorem 1 (GDD Property). Let s_0, s_1, ..., s_T be the convergent sequence of states and denote s_* = s_T the steady state. Further assume that there exists some step K with 0 < K ≤ T such that s_* = s_T = s_{T−1} = ... = s_{T−K}. Then, in the limit η → 0 and β → 0, the first K normalized updates in the second phase of C-EP are equal to the negatives of the first K gradients of BPTT, i.e. ∆_s^{C-EP}(β, η, t) = −∇_s^{BPTT}(t) and ∆_θ^{C-EP}(β, η, t) = −∇_θ^{BPTT}(t) for all t = 0, 1, ..., K.
A.1 A SPECTRUM OF FOUR COMPUTATIONALLY EQUIVALENT LEARNING ALGORITHMS Proving Theorem 1 amounts to prove the equivalence of C-EP and BPTT. In fact we can prove the equivalence of four algorithms, which all compute the gradient of the loss: 1. Backpropagation Through Time (BPTT), presented in Section B.2, 2. Recurrent Backpropagation (RBP), presented in Section B.3, 3. Equilibrium Propagation (EP), presented in Section 2, 4. Equilibrium Propagation with Continual Weight Updates (C-EP), introduced in Section 3. In this spectrum of algorithms, BPTT is the most practical algorithm to date from the point of view of machine learning, but also the less biologically realistic. In contrast, C-EP is the most realistic in terms of implementation in biological systems, while it is to date the least practical and least efficient for conventional machine learning (computations on standard Von-Neumann hardware are considerably slower due to repeated parameter updates, requiring memory access at each time-step of the second phase). Theorem 1 can be proved in three phases, using the following three lemmas. Lemma 2 (Equivalence of C-EP and EP). In the limit of small learning rate, i.e. η → 0, the (normalized) updates of C-EP are equal to those of EP: Lemma 3 (Equivalence of EP and RBP). Assume that the transition function derives from a primitive function, i.e. that F is of the form F (x, s, θ) = ∂Φ ∂s (x, s, θ). Then, in the limit of small hyperparameter β, the normalized updates of EP are equal to the gradients of RBP: Lemma 4 (Equivalence of BPTT and RBP). In the setting with static input x, suppose that the network has reached the steady state s * after T − K steps, i.e. Then the first K gradients of BPTT are equal to the first K gradient of RBP, i.e. Proofs of the Lemmas can be found in the following places: • The link between BPTT and RBP (Lemma 2) is known since the late 1980s and can be found e.g. in. We also prove it here in Appendix B. • Lemma 3 was proved in in the setting of real-time dynamics. • Lemma 4 is the new ingredient contributed here, and we prove it in Appendix A.3. Also a direct proof of the equivalence of EP and BPTT was derived in. First, recall the dynamics of C-EP in the second phase: starting from s β,η 0 = s * and θ β,η 0 = θ we have ∀t ≥ 0: We have also defined the normalized updates of C-EP: We also recall the dynamics of EP in the second phase: as well as the normalized updates of EP, as defined in: Lemma 2 (Equivalence of C-EP and EP). In the limit of small learning rate, i.e. η → 0, the (normalized) updates of C-EP are equal to those of EP: Proof of Lemma 2. We want to compute the limits of ∆ C−EP s (β, η, t) and ∆ C−EP θ (β, η, t) as η → 0 with η > 0. First of all, note that under mild assumptions -which we made here -of regularity on the functions Φ and (e.g. continuous differentiability), for fixed t and β, the quantities s It follows from Eq. and Eq. that Now let us compute lim η→0 (η>0) ∆ C−EP s (β, η, t). Using Eq., we have Similarly as before, for fixed t, is a continuous function of η. Therefore A consequence of Lemma 2 is that the total update of C-EP matches the total update of EP in the limit of small η, so that we retrieve the standard EP learning rule of Eq.. More explicitly, after K steps in the second phase and starting from θ, Eq.) In this section, we recall Backprop Through Time (BPTT) and the Almeida-Pineda Recurrent Backprop (RBP) algorithm, which can both be used to optimize the loss L * of Eq. 3. 
Historically, BPTT and RBP were invented separately around the same time. RBP was introduced at a time when convergent RNNs (such as the one studied in this paper) were popular. Nowadays, convergent RNNs are less popular; in the field of deep learning, RNNs are almost exclusively used for tasks that deal with sequential data and BPTT is the algorithm of choice to train such RNNs. Here, we present RBP in a way that it can be seen as a particular case of BPTT. Lemma 4, which we recall here, is a consequence of Proposition 5 and Definition 6 below. Lemma 4 (Equivalence of BPTT and RBP). In the setting with static input x, suppose that the network has reached the steady state s * after T − K steps, i.e. Then the first K gradients of BPTT are equal to the first K gradient of RBP, i.e. However, in general, the gradients ∇ BPTT (t) of BPTT and the gradients ∇ RBP (t) of RBP are not equal for t > K. This is because BPTT and RBP compute the gradients of different loss functions: • BPTT computes the gradient of the loss after T time steps, i.e. L = (s T, y), • RBP computes the gradients of the loss at the steady state, i.e. L * = (s *, y). Backpropagation Through Time (BPTT) is the standard method to train RNNs and can also be used to train the kind of convergent RNNs that we study in this paper. To this end, we consider the cost of the state s T after T time steps, denoted L = (s T, y), and we substitute the loss after T time steps L as a proxy for the loss at the steady state L * = (s *, y). The gradients of L can then be computed with BPTT. To do this, we recall some of the inner working mechanisms of BPTT. Eq. rewrites in the form s t+1 = F (x, s t, θ t+1 = θ), where θ t denotes the parameter of the model at time step t, the value θ being shared across all time steps. This way of rewriting Eq. enables us to define the partial derivative ∂L ∂θt as the sensitivity of the loss L with respect to θ t when θ 1,... θ t−1, θ t+1,... θ T remain fixed (set to the value θ). With these notations, the gradient ∂L ∂θ reads as the sum: BPTT computes the'full' gradient ∂L ∂θ by first computing the partial derivatives ∂L ∂st and ∂L ∂θt iteratively, backward in time, using the chain rule of differentiation. In this work, we denote the gradients that BPTT computes: Proposition 5 (Gradients of BPTT). The gradients ∇ BPTT s (t) and ∇ BPTT θ (t) satisfy the recurrence relationship B.3 FROM BACKPROP THROUGH TIME (BPTT) TO RECURRENT BACKPROP (RBP) In general, to apply BPTT, it is necessary to store in memory the history of past hidden states s 1, s 2,..., s T in order to compute the gradients ∇ BPTT s (t) and ∇ BPTT θ (t) as in Eq. 30-31. However, in our specific setting with static input x, if the network has reached the steady state s * after T − K steps, i.e. if s T −K = s T −K+1 = · · · = s T −1 = s T = s *, then we see that, in order to compute the first K gradients of BPTT, all one needs to know is ∂F ∂s (x, s *, θ) and ∂F ∂θ (x, s *, θ). To this end, all one needs to keep in memory is the steady state s *. In this particular setting, it is not necessary to store the past hidden states s T, s T −1,..., s T −K since they are all equal to s *. The Almeida-Pineda algorithm (a.k.a. Recurrent Backpropagation, or RBP for short), which was invented independently by and , relies on this property to compute the gradients of the loss L * using only the steady state s *. Similarly to BPTT, it computes quantities ∇ RBP s (t) and ∇ RBP θ (t), which we call'gradients of RBP', iteratively for t = 0, 1, 2,... 
RBP s (t) and ∇ RBP θ (t) are defined and computed iteratively as follows: Unlike in BPTT where keeping the history of past hidden states is necessary to compute (or 'backpropagate') the gradients, in RBP Eq. 33-34 show that it is sufficient to keep in memory the steady state s * only in order to iterate the computation of the gradients. RBP is more memory efficient than BPTT. Input: x, y, θ. Output: θ. 1: s 0 ← 0 2: for t = 0 to T − 1 do 3: Algorithm 4 RBP Input: x, y, θ. Output: θ. 1: s 0 ← 0 2: repeat 3: Figure 6: Left. Pseudo-code of BPTT. The gradients ∇(t) denote the gradients ∇ BPTT (t) of BPTT. Right. Pseudo-code of RBP. Difference between BPTT and RBP. In BPTT, the state s T −t is required to compute ∂F ∂s (x, s T −t, θ) and ∂F ∂θ (x, s T −t, θ); thus it is necessary to store in memory the sequence of states s 1, s 2,..., s T. In contrast, in RBP, only the steady state s * is required to compute ∂F ∂s (x, s *, θ) and ∂F ∂θ (x, s *, θ); it is not necessary to store the past states of the network. In this subsection we motivate the name of'gradients' for the quantities ∇ RBP s (t) and ∇ RBP θ (t) by proving that they are the gradients of L * in the sense of Proposition 7 below. They are also the gradients of what we call the'projected cost function' (Proposition 8), using the terminology of. Proposition 7 (RBP Optimizes L *). The total gradient computed by the RBP algorithm is the gradient of the loss L * = (s *, y), i.e. ∇ RBP s (t) and ∇ RBP θ (t) can also be expressed as gradients of L t = (s t, y), the cost after t time steps. In the terminology of, L t was named the projected cost. For t = 0, L 0 is simply the cost of the initial state s 0. For t > 0, L t is the cost of the state projected a duration t in the future. Proposition 8 (Gradients of RBP are Gradients of the Projected Cost). The'RBP gradients' ∇ RBP s (t) and ∇ RBP θ (t) can be expressed as gradients of the projected cost: where the initial state s 0 is the steady state s *. Proof of Proposition 7. First of all, by Definition 6 (Eq. 32-34) it is straightforward to see that Second, recall that the loss L * is where By the chain rule of differentiation, the gradient of L * (Eq. 39) is In order to compute ∂s * ∂θ, we differentiate the steady state condition (Eq. 40) with respect to θ, which yields Rearranging the terms, and using the Taylor expansion (Id − A) Therefore Proof of Proposition 8. By the chain rule of differentiation we have Evaluation this expression for s 0 = s * we get Model. To illustrate the equivalence of the four algorithms (BPTT, RBP, EP and CEP), we study a simple model with scalar variable s and scalar parameter θ: where s * is the steady state of the dynamics (it is easy to see that the solution is s * = θ). The dynamics rewrites s t+1 = F (s t, θ) with the transition function F (s, θ) = 1 2 (s + θ), and the loss rewrites L * = (s *) with the cost function (s) = 1 2 s 2. Furthermore, a primitive function of the system 1 is Φ(s, θ) = 1 4 (s + θ) 2. This model has no practical application; it is only meant for pedagogical purpose. With BPTT, an important point is that we approximate the steady state s * by the state after T time steps s T, and we approximate L * (the loss at the steady state) by the loss after T time steps L = (s T). In order to compute (i.e. 'backpropagate') the gradients of BPTT, Proposition 5 tells us that we need to compute The state after T time steps in BPTT converges to the steady state s * as T → ∞, therefore the gradients of BPTT converge to the gradients of RBP. 
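Before the analytical treatment that follows, the equivalence can also be checked numerically on this toy model in a few lines. The code below is a sketch of ours, not taken from the paper's repository: it computes the BPTT gradients with autograd (one parameter copy per time step) and the C-EP normalized parameter updates with the nudged dynamics derived from Φ(s, θ) = ¼(s + θ)², and prints the two side by side.

```python
import torch

def toy_check(theta0=0.5, T=50, K=10, beta=1e-3, eta=1e-6):
    """Check Theorem 1 on s_{t+1} = (s_t + theta)/2 with cost l(s) = s^2/2."""
    # BPTT: unroll with one parameter copy per step so that dL/dtheta_t is defined.
    thetas = [torch.tensor(theta0, requires_grad=True) for _ in range(T)]
    s = torch.tensor(0.0)
    for th in thetas:
        s = 0.5 * (s + th)
    loss = 0.5 * s ** 2
    grads = torch.autograd.grad(loss, thetas)        # grads[T-1-t] = grad_theta^BPTT(t)

    # C-EP second phase, starting from the steady state s_* = theta0.
    s, th = theta0, theta0
    for t in range(K):
        s_new = 0.5 * (s + th) - beta * s            # dPhi/ds - beta * dl/ds
        d_theta = 0.5 * (s_new - s) / beta           # (dPhi/dtheta(s_new) - dPhi/dtheta(s)) / beta
        th = th + eta * d_theta                      # continual weight update
        s = s_new
        print(t, float(-grads[T - 1 - t]), d_theta)  # the two columns should match

toy_check()
```

With the default values above, both columns are approximately −θ0/2^{t+1} for small β, in line with the closed-form expressions derived in the remainder of this appendix.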
Also notice that the steady state of the dynamics is s * = θ. Following the equations governing the second phase of EP (Fig. 1), we have: This linear dynamical system can be solved analytically: Notice that s β t → θ as β → 0; for small values of the hyperparameter β, the trajectory in the second phase is close to the steady state s * = θ. Using Eq. 19, it follows that the normalized updates of EP are Notice again that the normalized updates of EP converge to the gradients of RBP as β → 0. The system of equations governing the system is: 1 The primitive function Φ is determined up to a constant. First, rearranging the terms in the second equation, we get It follows that ∆ Therefore, all we need to do is to compute ∆ C−EP s (β, η, t). Second, by iterating the second equation over all indices from t = 0 to t − 1 we get Using s * = θ and plugging this into the first equation we get Solving this linear dynamical system, and using the initial condition s Finally: Step-by-step equivalence of the dynamics of EP and gradient computation in BPTT was shown in and was refered to as the Gradient-Descending Updates (GDU) property. In this appendix, we first explain the connection between the GDD property of this work and the GDU property of. Then we prove another version of the GDD property (Theorem 9 below), more general than Theorem 1. The GDU property of states that the (normalized) updates of EP are equal to the gradients of BPTT. Similarly, the Gradient-Descending Dynamics (GDD) property of this work states that the normalized updates of C-EP are equal to the gradients of BPTT. The difference between the GDU property and the GDD property is that the term'update' has slightly different meanings in the contexts of EP and C-EP. In C-EP, the'updates' are the effective updates by which the neuron and synapses are being dynamically updated throughout the second phase. In contrast in EP, the'updates' are effectively performed at the end of the second phase. The Gradient Descending Dynamics property (GDD, Theorem 1) states that, when the system dynamics derive from a primitive function, i.e. when the transition function F is of the form F = ∂Φ ∂s, then the normalized updates of C-EP match the gradients provided by BPTT. Remarkably, even in the case of the C-VF dynamics that do not derive from a primitive function Φ, Fig. 5 shows that the biologically realistic update rule of C-VF follows well the gradients of BPTT. More illustrations of this property are shown on Fig. 12 and Fig. 13. In this section we give a theoretical justification for this fact by proving a more general than Theorem 1. First, recall the dynamics of the C-VF model. In the first phase: where σ is an activation function and W is a square weight matrix. In the second phase, starting from s β,η 0 = s * and W β,η 0 = W, the dynamics read: Now let us define the transition function F (s, W) = σ(W · s), so that the dynamics of the first phase rewrites As for the second phase, notice that Now, recall the definition of the normalized updates of C-VF, as well as the gradients of the loss L = (s T, y) after T time steps, computed with BPTT: The loss L and the gradients ∇ ). Then, in the limit η → 0 and β → 0, the first K normalized updates of C-VF follow the the first K gradients of BPTT, i.e. ∀t = 0, 1,..., K: A few remarks need to be made: Ignoring the factor σ (W · s), we see that if W is symmetric then the Jacobian of F is also symmetric, in which case the conditions of Theorem 9 are met. 2. Theorem 1 is a special case of Theorem 9. 
To see why, notice that if the transition function F is of the form In this case the extra assumption in Theorem 9 is automatically satisfied. Theorem 9 is a consequence of Proposition 5 (Appendix B.2), which we recall here, and Lemma 10 below. BPTT s (t) and ∇ BPTT θ (t) satisfy the recurrence relationship Lemma 10 (Updates of C-VF). Define the (normalized) neural and weight updates of C-VF in the limit η → 0 and β → 0: They satisfy the recurrence relationship The proof of Lemma 10 is similar to the one provided in. In this section, we describe the C-EP and C-VF algorithms when implemented on multi-layered models, with tied weights and untied weights respectively. In the fully connected layered architecture model, the neurons are only connected between two consecutive layers (no skip-layer connections and no lateral connections within a layer). We denote neurons of the n-th layer as s n with n ∈ [0, N − 1], where N is the number of hidden layers. Layers are labelled in a backward fashion: n = 0 labels the output layer, n = 1 the first hidden layer starting from the output layer, and n = N − 1 the last hidden layer (before the input layer). Thus, there are N hidden layers in total. Fig. 7 shows this architecture with N = 2. Each model are presented here in a "real-time" and "discrete-time" settings For each model we lay out the equations of the neuron and synapse dynamics, we demonstrate the GDD property and we specify in which part of the main text they are used. We present in this order: Demonstrating the Gradient Descending Dynamics (GDD) property (Theorem 1) on MNIST. For this experiment, we consider the 784-512-... -512-10 network architecture, with 784 input neurons, 10 ouput neurons, and 512 neurons per hidden layer. The activation function used is σ(x) = tanh(x). The experiment consists of the following: we take a random MNIST sample (of size 1 × 784) and its associated target (of size 1 × 10). For a given value of the time-discretization parameter, we perform the first phase for T steps. Then, we perform on the one hand BPTT over K steps (to compute the gradients ∇ BPTT), on the other hand C-EP (or C-VF) over K steps for given values of β and η (to compute the normalized updates ∆ C−EP or ∆ C−VF) and compare the gradients and normalized updates provided by the two algorithms. Precise values of the hyperparameters, T, K, β and η are given in Tab. E.6. Equations with N = 2. We consider the layered architecture of Fig. 7, where s 0 denotes the output layer, and the feedback connections are constrained to be the transpose of the feedforward connections, i.e. W nn−1 = W n−1n. In the discrete-time setting of EP, the dynamics of the first phase are defined as: In the second phase the dynamics reads: As usual, y denotes the target. Consider the function: We can compute, for example: Comparing Eq. and Eq., and ignoring the activation function σ, we can see that And similarly for the layers s 0 and s 2. According to the definition of ∆ C−EP θ in Eq., for every layer and every t ∈ [0, K]: Simplifying the equations with N = 2. To go from our multi-layered architecture to the more general model presented in section 4.1. we define the state s of the network as the concatenation of all the layers' states, i.e. s = (s 2, s 1, s 0) and we define the weight matrices W and W x as: Note that Eq. and Eq. can be vectorized into: Generalizing the equations for any N. 
For a general architecture with a given N, the dynamics of the first phase are defined as: and those of the second phase as: where y denotes the target. Defining: ignoring the activation function σ, Eq. rewrites: According to the definition of ∆ C−EP θ in Eq., for every layer W nn+1 and every t ∈ [0, K]: Defining s = (s N, s N −1, . . ., s 0) and: Eq. and Eq. can also be vectorized into: Thereafter we introduce the other models in this general case. Context of use. This model is used for training experiments in Section 4.2 and Table 4.1. Equations. Recall that we consider the layered architecture of Fig. 7, where s 0 denotes the output layer. Just like in the discrete-time setting of EP, the dynamics of the first phase are defined as: Again, as in EP, the feedback connections are constrained to be the transpose of the feedforward connections, i.e. W nn−1 = W n−1n. In the second phase the dynamics reads: (β, η, t) ∀θ ∈ {W nn+1} As usual, y denotes the target. Since Eq. and Eq. are the same, the equations describing the C-EP model can also be written in a vectorized block-wise fashion, as in Eq. and Eq.. We can consequently define the C-EP model in Section 4.1 per Eq.. According to the definitions of Eq. and Eq., for every layer W nn+1 and every t ∈ [0, K]: Context of use. This model has not been used in this work. We only introduce it for completeness with respect to. Equations. For this model, the primitive function is defined as: so that the equations of motion read: In the second phase: ∀θ ∈ {W nn+1} where is a time-discretization parameter and y denotes the target. According the definition of the C-EP dynamics (Eq.), the definition of ∆ C−EP θ (Eq.) and the explicit form of Φ (Eq. 96), for all time step t ∈ [0, K], we have: Under review as a conference paper at ICLR 2020 Equations. Recall that we consider the layered architecture of Fig. 7, where s 0 denotes the output layer. The dynamics of the first phase in C-VF are defined as: Here, note the difference with EP and C-EP: the feedforward and feedback connections are unconstrained. In the second phase of C-VF: As usual y denotes the target. Note that Eq. can also be in a vectorized block-wise fashion as Eq. with s = (s 0, s 1, . . ., s N −1) and provided that we define W and W x as: For all layers W nn+1 and W n+1n, and every t ∈ [0, K], we define: Table E.6 for precise hyperparameters. Equations. For this model, the dynamics of the first phase are defined as: where is the time-discretization parameter. Again, as in the discre-time version of C-VF, the feedforward and feedback connections W nn−1 and W n−1n are unconstrained. In the second phase, the dynamics reads: where y denotes the target, as usual. For every feedforward connection matrix W nn+1 and every feedback connection matrix W n+1n, and for every time step t ∈ [0, K] in the second phase, we define In the following figures, we show the effect of using continual updates with a finite learning rate in terms of the ∆ C−EP and −∇ BPTT processes on different models introduced above. These figures have been realized either in the discrete-time or continuous-time setting with the fully connected layered architecture with one hidden layer on MNIST. Dashed an continuous lines respectively represent the normalized updates ∆ and the gradients ∇ BPTT. Each randomly selected synapse or neuron correspond to one color. We add an s or θ index to specify whether we analyse neuron or synapse updates and gradients. 
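The per-step weight updates whose traces are plotted in these figures can be sketched as follows; since the exact expressions were not preserved above, the two forms below are assumptions of ours, consistent with the primitive function Φ(s, W) = (1/2) s·W·s of Section 4.1 for C-EP and with the rate-based STDP-like rule discussed in Section 5 for C-VF (constant factors may differ from the paper's equations).

```python
def cep_weight_update(W, s_prev, s_next, beta, eta):
    """C-EP (tied, symmetric weights): Hebbian difference of co-activations
    at two consecutive time steps of the second phase."""
    delta = (torch.outer(s_next, s_next) - torch.outer(s_prev, s_prev)) / beta
    return W + eta * delta

def cvf_weight_update(W, s_pre, s_post_prev, s_post_next, beta, eta):
    """C-VF (untied weights): pre-synaptic activity times the temporal change
    of post-synaptic activity, applied to each connection matrix separately."""
    delta = torch.outer(s_post_next - s_post_prev, s_pre) / beta
    return W + eta * delta
```

In a layered network, these rules are applied to every W_{n,n+1} (and, for C-VF, to every W_{n+1,n} separately) using only the neuron states at times t and t+1, which is what makes the updates local in both space and time.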
Each C-VF simulation has been realized with an angle between forward and backward weights of 0 degrees (i.e. Ψ(θ f, θ b) = 0°). For each figure, left panels demonstrate the GDD property with C-EP with η = 0, and the right panels show that, upon using η > 0, dashed and continuous lines start to split apart. Simulation framework. Simulations have been carried out in Pytorch. The code has been attached to the supplementary materials upon submitting this work on OpenReview. We have also attached a readme.txt with a specification of all dependencies and packages, descriptions of the python files, as well as the commands to reproduce all the results presented in this paper. Data set. Training experiments were carried out on the MNIST data set. The training set and test set include 60000 and 10000 samples respectively. Optimization. Optimization was performed using stochastic gradient descent with mini-batches of size 20. For each simulation, weights were Glorot-initialized. No regularization technique was used and we did not use the persistent trick of caching and reusing converged states for each data sample between epochs as in. Activation function. For training, we used the following activation function. Although it is a shifted and rescaled sigmoid function, we shall refer to this activation function as 'sigmoid'. Use of a randomized β. The option 'Random β' appearing in the detailed table of results (Table 3) refers to the following procedure. During training, instead of using the same β across mini-batches, we only keep the same absolute value of β and sample its sign from a Bernoulli distribution of probability 1/2 at each mini-batch iteration. This procedure was hinted at by to improve test error, and is used in our context to improve the model convergence for Continual Equilibrium Propagation -appearing as C-EP and C-VF in Table 4.1 -training simulations. Tuning the angle between forward and backward weights. In Table 4.1, we investigate C-VF initialized with different angles between the forward and backward weights -denoted as Ψ in Table 4.1. Denoting them respectively θ f and θ b, the angle κ between them is defined here as: where Tr denotes the trace, i.e. Tr(A) = Σ_i A_ii for any square matrix A. To tune κ(θ f, θ b) with arbitrary precision, the procedure is the following: starting from θ b = θ f, i.e. κ(θ f, θ b) = 0, we can gradually increase the angle between θ f and θ b by flipping the sign of an arbitrary proportion of components of θ b. The more components have their sign flipped, the larger the angle. More formally, we write θ b in the form θ b = M (p) θ f and we define: where M (p) is a mask of binary random values {+1, -1} of the same dimension as θ f: M (p) = −1 with probability p and M (p) = +1 with probability 1 − p. Taking the cosine and the expectation of Eq., we obtain: Thus, the angle Ψ between θ f and θ f M (p) can be tuned by the choice of p through: Hyperparameter search for EP. We distinguish between two kinds of hyperparameters: the recurrent hyperparameters -i.e. T, K and β -and the learning rates. A first guess of the recurrent hyperparameters T and β is found by plotting the ∆ C−EP and ∇ BPTT processes associated to synapses and neurons to see qualitatively whether the theorem is approximately satisfied, and by conjointly computing the proportions of synapses whose ∆ C−EP W processes have the same sign as their ∇ BPTT W processes. K can also be read off the plots as the number of steps required for the gradients to converge.
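A minimal sketch of this sign-agreement diagnostic, assuming `delta_cep` and `grad_bptt` are arrays holding, for each synapse, the normalized C-EP update and the corresponding BPTT gradient:

```python
import numpy as np

def sign_agreement(delta_cep, grad_bptt):
    """Proportion of synapses whose Delta C-EP process has the same sign
    as the corresponding BPTT gradient process."""
    same = np.sign(delta_cep) == np.sign(grad_bptt)
    return same.mean()
```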
Moreover, plotting these processes reveals that gradients are vanishing when going away from the output layer, i.e. they lose up to a factor of 10^-1 in magnitude when going from a layer to the previous (i.e. upstream) layer. We subsequently initialized the learning rates with increasing values going from the output layer to upstream layers. The typical range of learning rates is [10^-3, 10^-1], for T, for K and [0.01, 1] for β. Hyperparameters were adjusted until the train error was as close to zero as possible. Finally, in order to obtain minimal recurrent hyperparameters -i.e. smallest T and K possible -we progressively decreased T and K until the train error increased again. Table 2: Table of hyperparameters used for training. "C" and "VF" respectively denote "continual" and "vector-field", "-#h" stands for the number of hidden layers. The sigmoid activation is defined by Eq. Table 3: Training on MNIST with EP, C-EP and C-VF. "#h" stands for the number of hidden layers. We indicate over five trials the mean and standard deviation for the test error, and the mean train error in parentheses. T (resp. K) is the number of iterations in the first (resp. second) phase. Full | We propose a continual version of Equilibrium Propagation, where neuron and synapse dynamics occur simultaneously throughout the second phase, with theoretical guarantees and numerical simulations. | 692 | scitldr
There are two main lines of research on visual reasoning: neural module network (NMN) with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space. The former excels in interpretability and compositionality, while the latter usually achieves better performance due to model flexibility and parameter efficiency. In order to bridge the gap of the two, we present Meta Module Network (MMN), a novel hybrid approach that can efficiently utilize a Meta Module to perform versatile functionalities, while preserving compositionality and interpretability through modularized design. The proposed model first parses an input question into a functional program through a Program Generator. Instead of handcrafting a task-specific network to represent each function like traditional NMN, we use Recipe Encoder to translate the functions into their corresponding recipes (specifications), which are used to dynamically instantiate the Meta Module into Instance Modules. To endow different instance modules with designated functionality, a Teacher-Student framework is proposed, where a symbolic teacher pre-executes against the scene graphs to provide guidelines for the instantiated modules (student) to follow. In a nutshell, MMN adopts the meta module to increase its parameterization efficiency, and uses recipe encoding to improve its generalization ability over NMN. Experiments conducted on the GQA benchmark demonstrates that: MMN achieves significant improvement over both NMN and monolithic network baselines; MMN is able to generalize to unseen but related functions. Visual reasoning requires a model to learn strong compositionality and generalization abilities, i.e., understanding and answering compositional questions without having seen similar semantic compositions before. Such compositional visual reasoning is a hallmark for human intelligence that endows people with strong problem-solving skills given limited prior knowledge. Recently, neural module networks (NMNs) (a; ; b; ;) have been proposed to perform such complex reasoning tasks. First, NMN needs to pre-define a set of functions and explicitly encode each function into unique shallow neural networks called modules, which are composed dynamically to build an instance-specific network for each input question. This approach has high compositionality and interpretability, as each module is specifically designed to accomplish a specific sub-task and multiple modules can be combined to perform unseen combinations during inference. However, with increased complexity of the task, the set of functional semantics and modules also scales up. As observed in , this leads to higher model complexity and poorer scalability on more challenging scenarios. Another line of research on visual reasoning is focused on designing monolithic network architecture, such as MFB , BAN , DCN , and MCAN. These black-box methods have achieved state-of-the-art performance on more challenging realistic image datasets like VQA (a), surpassing the aforementioned NMN approach. They use a unified neural network to learn general-purpose reasoning skills , which is known to be more flexible and scalable without making strict assumption about the inputs or designing operation-specific networks for the predefined functional semantics. As the reasoning procedure is conducted in the latent feature space, the reasoning process is difficult to interpret. 
Such a model also lacks the ability to capture the compositionality of questions, thus suffering from poorer generalizability than module networks. Final steps Figure 1: The model architecture of Meta Module Network: the lower part describes how the question is translated into programs and instantiated into operation-specific modules; the upper part describes how execution graph is built based on the instantiated modules. Motivated by this, we propose a Meta Module Network (MMN) to bridge the gap, which preserves the merit of interpretability and compositionality of traditional module networks, but without requiring strictly defined modules for different semantic functionality. As illustrated in Figure 1, instead of handcrafting a shallow neural network for each specific function like NMNs, we propose a flexible meta (parent) module g(*, *) that can take a function recipe f as input and instantiates a (child) module g f (*) = g(*, f) to accomplish the functionality specified in the recipe. These instantiated modules with tied parameters are used to build an execution graph for answer prediction. The introduced meta module empowers the MMN to scale up to accommodate a larger set of functional semantics without adding complexity to the model itself. To endow each instance module with the designated functionality, we introduce module supervision to enforce each module g f (*) to imitate the behavior of its symbolic teacher learned from ground-truth scene graphs provided in the training data. The module supervision can dynamically disentangle different instances to accomplish small sub-tasks to maintain high compositionality. Our main contributions are summarized as follows. (i) We propose Meta Module Network for visual reasoning, in which different instance modules can be instantiated from a meta module. (ii) Module supervision is introduced to endow different functionalities to different instance modules. (iii) Experiments on GQA benchmark validate the outperformance of our model over NMN and monolithic network baselines. We also qualitatively provide visualization on the inferential chain of MMN to demonstrate its interpretability, and conduct experiments to quantitatively showcase the generalization ability to unseen functional semantics. The visual reasoning task (a) is formulated as follows: given a question Q grounded in an image I, where Q = {q 1, · · ·, q M} with q i representing the i-th word, the goal is to select an answer a ∈ A from a set A of possible answers. During training, we are provided with an additional scene graph G for each image I, and a functional program P for each question Q. During inference, scene graphs and programs are not provided. Figure 2: Architecture of the Coarse-to-fine Program Generator: the left part depicts the coarse-tofine two-stage generation; the right part depicts the ing execution graph. The Visual Encoder is based on a pre-trained object detection model that extracts from image I a set of regional features, where r i ∈ R Dv, N denotes the number of region of interest, and D v denotes the feature dimension. Similar to a Transformer block , we first use two self-attention networks, SA q and SA r, to encode the question and the regional features asQ = SA q (Q, Q; φ) andR = SA r (R, R; φ), respectively, whereQ ∈ R M ×D,R ∈ R N ×D, and D is the network's hidden dimension. 
Based on this, a cross-attention network CA is applied to use the question as guidance to refine the visual features into V = CA(R,Q; φ) ∈ R N ×D, whereQ is used as the query vector, and φ denotes all the parameters in the Visual Encoder. The attended visual features V will then be fed into the meta module, detailed in Sec. 2.3. We visualize the encoder in the Appendix for better illustration. Similar to other programming languages, we define a set of syntax rules for building valid programs and a set of semantics to determine the functionality of each program. Specifically, we define a set of functions F with their fixed arity n f ∈ {1, 2, 3, 4} based on the semantic string provided in Hudson & Manning (2019a). The definitions for all the functions are provided in the Appendix. The defined functions can be divided into 10 different categories based on their abstract semantics (e.g., "relate, verify, filter"), and each abstract function type is further implemented with different realizations depending on their arguments (e.g., "verify attribute, verify geometric, verify relation"). In total, there are 48 different functions defined, whose returned values could be List of Objects, Boolean or String. A program P is viewed as a sequence of function calls f 1, · · ·, f L. For example, in Figure 2, f 2 is Relate(, beside, boy), the functionality of which is to find a boy who is beside the objects returned by f 1: Select(ball). Formally, we call Relate the "function name", the "dependency", and beside, boy the "arguments". By exploiting the dependency relationship between functions, we build an execution graph for answer prediction. In order to generate syntactically plausible programs, we follow and adopt a coarse-to-fine two-stage generation paradigm, as illustrated in Figure 2. Specifically, the Transformer-based program generator first decodes a sketch containing only function names, and then fills the dependencies and arguments into the sketch to generate the program P. Such a two-stage generation process helps guarantee the plausibility and grammaticality of synthesized programs. We apply the known constraints to enforce the syntax in the fine-grained generation stage. For example, if function Filter is sketched, we know there are two tokens required to complete the function. The first token should be selected from the dependency set (,,...), while the second token should be selected from the attribute set (e.g., color, size). With these syntactic constraints, our program synthesizer can achieve a 98.8% execution accuracy. Instead of learning a full inventory of task-specific modules for different functions as in NMN (b), we design an abstract Meta Module that can instantiate a generic meta mod- ule into instance modules based on an input function recipe, which is a set of pre-defined keyvalue pairs specifying the properties of the function. As exemplified in Figure 3, when taking Function:relate; Geometric:to the left as the input, the Recipe Embedder produces a recipe vector to transform the meta module into a "geometric relation" module, which can search for target objects that the current object is to the left of. The left part of Figure 3 demonstrates the computation flow in Meta Module based on multi-head attention network . Specifically, a Recipe Embedder encodes a function recipe into a real-valued vector r f ∈ R D. 
In the first attention layer, r f is fed into an attention network g d as the query vector to incorporate the output (ô 1:K) of neighbor modules on which the current module is dependent. The intermediate output (o d) from this attention layer is further fed into a second attention network g v to incorporate the visual representation V of the image. The final output of this second attention layer is denoted as g(r f, ô 1:K, V). Here is how the instantiation process of Meta Module works. First, we feed a function f to instantiate the meta module g into an instance module g f (ô 1:K, V; ψ), where ψ denotes the parameters of the meta module. The instantiated module is then used to build the execution graph on the fly as depicted in Figure 1. Each module g f outputs o(f) ∈ R D, which acts as the message passed to its neighbor modules. For brevity, we use o(f i) to denote the MMN's output at the i-th function f i. The final output o(f L) of function f L will be fed into a softmax-based classifier for answer prediction. During training, we optimize the parameters ψ (in Meta Module) and the parameters φ (in Visual Encoder) to maximize the likelihood p φ,ψ (a|P, Q, R) on the training data, where a is the answer, and P, Q, R are programs, questions and visual features, respectively. As demonstrated, Meta Module Network excels over standard module networks in the following aspects. (i) The parameter space of different functions is shared, which means similar functions can be jointly optimized, benefiting from more efficient parameterization. For example, query color and verify color share the same partial parameters related to the input color. (ii) Our Meta Module can accommodate larger function semantics by using function recipes and scale up to more complex reasoning scenes. (iii) Since all the functions are embedded into the recipe space, the functionality of an unseen recipe can be inferred from its neighboring recipes (see Sec. 3.4 for details), which equips our Meta Module with better generalization ability to unseen functions. In this sub-section, we explain how to extract supervision signals from scene graphs and programs provided in the training data, and how to adapt these learning signals during inference when no scene graphs or programs are available. We call this "Module Supervision", which is realized by a Teacher-Student framework as depicted in Figure 4. First, we define a Symbolic Executor as the 'Teacher', which can traverse the ground-truth scene graph provided in the training data and obtain intermediate results by executing the programs. The 'Teacher' exhibits these results as a guideline γ for the 'Student' instance module g f to adhere to during training. Knowledge Transfer: As no scene graphs are provided during inference, we need to train a Student to mimic the Symbolic Teacher in associating objects between input images and generated programs for end-to-end model training. To this end, we compare the execution results from the Symbolic Teacher with the object detections from the Visual Encoder to provide a learning guideline for the Student. Specifically, for the i-th step function f i, we compute the overlap between its execution result b i and all the model-detected regions R as a i,j = Intersect(b i, r j) / Union(b i, r j). If Σ j a i,j > 0, which means that there exist detected bounding boxes overlapping with the ground-truth object, we normalize a i,j over R to obtain a guideline distribution γ i,j = a i,j / Σ j a i,j and append an extra 0 at the end to obtain γ i ∈ R N+1 (a sketch of this construction is given below).
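A minimal NumPy sketch of this guideline construction follows; the helper names (`iou`, `build_guideline`) are ours, boxes are assumed to be in (x1, y1, x2, y2) format, and the zero-overlap case, discussed next, puts all mass on the extra 'No Match' bit.

```python
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def build_guideline(teacher_box, detected_boxes):
    """Guideline distribution over the N detected regions plus one extra bit."""
    a = np.array([iou(teacher_box, r) for r in detected_boxes])  # a_{i,j}
    gamma = np.zeros(len(detected_boxes) + 1)
    if a.sum() > 0:
        gamma[:-1] = a / a.sum()   # normalized overlaps, extra bit stays 0
    else:
        gamma[-1] = 1.0            # no detected box matches: all mass on 'No Match'
    return gamma
```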
If Σ j a i,j = 0, which means no detected bounding box has any overlap with the ground-truth object, we instead place all the probability mass on the extra bit and use this one-hot vector as the learning guideline. The last bit represents "No Match". Student Training: To explicitly teach the student module g f to follow the learning guideline from the Symbolic Teacher, we add an additional head to each module output o(f i) to predict the execution distribution, denoted as γ̂ i = softmax(MLP(o(f i))). During training, we propel the instance module to align its prediction γ̂ i with the guideline distribution γ i by minimizing their KL divergence KL(γ̂ i || γ i). Formally, given the quadruple (P, Q, R, a) and the pre-computed guideline distribution γ, we propose to add this KL divergence to the standard loss function with a balancing factor η: In this section, we conduct the following experiments. (i) We evaluate the proposed Meta Module Network on the GQA v1.1 dataset (a), and compare with the state-of-the-art methods. (ii) We provide visualization of the inferential chains and perform fine-grained error analysis based on that. (iii) We design synthesized experiments to quantitatively measure our model's generalization ability towards unseen functional semantics. Dataset The GQA dataset contains 22M questions over 140K images. This full "all-split" dataset has unbalanced answer distributions and is thus further re-sampled into a "balanced-split" with a more balanced answer distribution. The new split consists of 1M questions. Compared with the VQA v2.0 dataset, the questions in GQA are designed to require multi-hop reasoning to test the reasoning skills of developed models. Compared with the CLEVR dataset (a), GQA greatly increases the complexity of the semantic structure of questions, leading to a more diverse function set. The real-world images in GQA also bring in a bigger challenge in visual understanding. In GQA, around 94% of questions need multi-hop reasoning, and 51% of questions are about the relationships between objects. Following Hudson & Manning (2019a), the main evaluation metrics used in our experiments are accuracy, consistency, plausibility, and validity. Pre-trained 300-dimensional word embeddings are used to encode both questions and function keywords. The total vocabulary size is 3761, including all the functions, objects, and attributes. For training, we first use the 22M unbalanced "all-split" to bootstrap our model with a mini-batch size of 2048 for 3-5 epochs, then fine-tune on the "balanced-split" with a mini-batch size of 256. The testdev-balanced split is used for selecting the best model. We report our experimental results on the test2019 split (from the public GQA leaderboard) in Table 1. First, we observe a significant performance gain from MMN over NMN (b), which demonstrates the effectiveness of the proposed meta module mechanism. Further, we observe that our model outperforms the VQA state-of-the-art monolithic model MCAN by a large margin, which demonstrates the strong compositionality of our module-based approach. Overall, our single model achieves competitive performance (tied top 2) among published approaches. Notably, we achieve the same performance as LXMERT, which is pre-trained on large-scale out-of-domain datasets. The performance gap with NSM (b) is debatable since our model is self-contained without relying on a well-tuned external scene graph generation model. To verify the contribution of each component in MMN, we perform several ablation studies: w/o Module Supervision vs. w/ Module Supervision.
We investigate the influence of module supervision by changing the hyper-parameter η from 0 to 2.0. Attention Supervision vs. Guideline: We investigate different module supervision strategies, by directly supervising multi-head attention in multi-modal fusion stage (Figure 1). Specifically, we supervise different number of heads or the mean/max over different heads. w/o Bootstrap vs w/ Bootstrap: We investigate the effectiveness of bootstrapping in training to validate the influence of pre-training on the final model performance. Results are summarized in Table 2. From Ablation, we observe that without module supervision, our MMN achieves decent performance improvement over MCAN, but with much fewer parameters. By increasing η from 0.1 to 0.5, accuracy steadily improves, which reflects the importance of module supervision. Further increasing the value of η did not improve the performance empirically. From Ablation, we observe that directly supervising the attention weights in different Transformer heads only yields marginal improvement, which justifies the effectiveness of the implicit regularization in MMN. From Ablation, we observe that bootstrapping is an important step for MMN, as it explores more data to better regularize functionalities of reasoning modules. It is also observed that the epoch number of bootstrap also influences the final model performance. Choosing the optimal epoch size can lead to a better initialization for the following fine-tuning stage. Figure 5: Visualization of the inferential chains learned by our model. To demonstrate the interpretability of MMN, Figure 5 provides some visualization to show the inferential chain during reasoning. As shown, the model correctly executes the intermediate and yields the correct final answer. To better interpret the model's behavior, we also perform quantitative analysis to diagnose the errors in the inferential chain. Here, we held out a small validation set to analyze the execution accuracy of different functions. Our model obtains Recall@1 of 59% and Recall@2 of 73%, which indicates that the object selected by the symbolic teacher has 59% chance of being top-1, and 73% chance as the top-2 by the student model, significantly higher than random-guess Recall@1 of 2%, demonstrating the effectiveness of module supervision. Furthermore, we conduct detailed analysis on function-wise execution accuracy to understand the limitation of MMN. Results are shown in Table 3. Below are the observed main bottlenecks: (i) relation-type functions such as relate, relate inv; and (ii) object/attribute recognition functions such as query name, query color. We hypothesize that this might be attributed to the quality of visual features from standard object detection models , which does not capture the relations between objects well. Besides, the object and attribute classification network is not fine-tuned on GQA. This suggests that scene graph modeling for visual scene understanding is critical to surpassing NSM (b) on performance. To demonstrate the generalization ability of the meta module, we perform additional experiments to validate whether the recipe representation can generalize to unseen functions. Specifically, we held out all the training instances containing verify shape, relate name, choose name to quantitatively measure model's on these unseen functions. Standard NMN (b) fails to handle these unseen functions, as it requires training instances for the randomly initialized shallow network for these unseen functions. 
In contrast, MMN can handle these unseen functions by embedding them into the shared recipe space and instantiating the meta module accordingly. Table 4 shows that the zero-shot accuracy of the proposed meta module is significantly higher than NMN (equivalent to random guess), which demonstrates the generalization ability of MMN and validates the extensibility of the proposed recipe encoding. Instead of handcrafting new modules every time new functional semantics come in, like NMN (b), our MMN is more flexible and extensible for handling growing function sets under incremental learning. Monolithic Networks: Most monolithic networks for visual reasoning resort to attention mechanisms for multimodal fusion. To realize multi-hop reasoning on complex questions, SAN, MAC and MuRel models have been proposed. However, their reasoning procedure is built on a general-purpose reasoning block, which cannot be disentangled to perform specific tasks, resulting in limited model interpretability and compositionality. Neural Module Networks: By parsing a question into a program and executing the program through dynamically composed neural modules, NMN excels in interpretability and compositionality by design. However, its success is mostly restricted to the synthetic CLEVR dataset, on which it can be surpassed by simpler methods such as relational networks and FiLM. Our MMN is a module network in concept, thus possessing high interpretability and compositionality. However, different from traditional NMN, MMN uses only one Meta Module for program execution recurrently, similar to an LSTM cell in a Recurrent Neural Network. This makes MMN a monolithic network in practice, which ensures strong empirical performance without sacrificing model interpretability. State of the Art on GQA: GQA was introduced in Hudson & Manning (2019a) for real-world visual reasoning. Simple monolithic networks, the MAC network, and language-conditioned graph neural networks have been developed for this task. LXMERT, a large-scale pre-trained encoder, has also been tested on this dataset. Recently, the Neural State Machine (NSM) (b) proposed to first predict a probabilistic scene graph, then perform multi-hop reasoning over the graph for answer prediction. The scene graph serves as a strong prior for the model. Our model is designed to leverage dense visual features extracted from object detection models; it is thus orthogonal to NSM and can be enhanced with their scene graph generator once it is publicly available. Different from the aforementioned approaches, MMN also performs explicit multi-hop reasoning based on predicted programs, so the inferred reasoning chain can be directly used for model interpretability. In this paper, we propose Meta Module Network that bridges the gap between monolithic networks and traditional module networks. Our model is built upon a Meta Module, which can be instantiated into an instance module performing specific functionalities. Our approach significantly outperforms baseline methods and achieves comparable performance to the state of the art. Detailed error analysis shows that relation modeling over scene graphs could further boost MMN to higher performance. For future work, we plan to incorporate scene graph prediction into the proposed framework. A APPENDIX The visual encoder and multi-head attention network are illustrated in Figure 6 and Figure 7, respectively. Figure 6: Illustration of the Visual Encoder described in Section 2.1. The recipe embedder is illustrated in Figure 8. Figure 8: Illustration of the recipe embedder.
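For concreteness, the following PyTorch-style sketch shows one plausible way the recipe embedder and the two attention steps of the meta module could be wired together; the class structure and names (`RecipeEmbedder`, `MetaModule`) are our assumptions for illustration rather than the reference implementation.

```python
import torch
import torch.nn as nn

class RecipeEmbedder(nn.Module):
    """Embeds a function recipe (key-value pairs such as
    Function:relate; Geometric:to the left) into a vector r_f of size D."""
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, recipe_token_ids):               # (num_fields,)
        fields = self.tok(recipe_token_ids)             # (num_fields, D)
        return torch.relu(self.proj(fields.mean(dim=0)))  # r_f in R^D

class MetaModule(nn.Module):
    """One meta module: attend over the outputs of dependent modules with r_f
    as the query, then over the question-attended visual features V."""
    def __init__(self, d_model, n_heads=8):
        super().__init__()
        self.dep_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.vis_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, r_f, dep_outputs, visual_feats):
        # r_f: (D,), dep_outputs: (K, D), visual_feats: (N, D)
        q = r_f.view(1, 1, -1)
        o_d, _ = self.dep_attn(q, dep_outputs.unsqueeze(0), dep_outputs.unsqueeze(0))
        o_f, _ = self.vis_attn(o_d, visual_feats.unsqueeze(0), visual_feats.unsqueeze(0))
        return o_f.view(-1)   # o(f) in R^D, passed as a message to neighbor modules
```

In the full model, the same meta module parameters ψ are shared across all program steps; only the recipe vector r f changes from one instantiated module to the next.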
The function statistics are listed in Table 5. The detailed function descriptions are provided in Figure 9. More inferential chains are visualized in Figure 10 and Figure 11. | We propose a new Meta Module Network to resolve some of the restrictions of previous Neural Module Network to achieve strong performance on realistic visual reasoning dataset. | 693 | scitldr
We propose a new perspective on adversarial attacks against deep reinforcement learning agents. Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy. It is pre-computed, therefore fast inferred, and could thus be usable in a real-time scenario. We show its effectiveness on Atari 2600 games in the novel read-only setting. In the latter, the adversary cannot directly modify the agent's state -its representation of the environment- but can only attack the agent's observation -its perception of the environment. Directly modifying the agent's state would require a write-access to the agent's inner workings and we argue that this assumption is too strong in realistic settings. We are interested in the problem of attacking sequential control systems that use deep neural policies. In the context of supervised learning, previous work developed methods to attack neural classifiers by crafting so-called adversarial examples. These are malicious inputs particularly successful at fooling deep networks with high-dimensional input-data like images. Within the framework of sequential-decision-making, previous works used these adversarial examples only to break neural policies. Yet the attacks they build are rarely applicable in a real-time setting as they require to craft a new adversarial input at each time step. Besides, these methods use the strong assumption of having a write-access to what we call the agent's inner state -the actual input of the neural policy built by the algorithm from the observations-. When taking this assumption, the adversary -the algorithm attacking the agent-is not placed at the interface between the agent and the environment where the system is the most vulnerable. We wish to design an attack with a more general purpose than just shattering a neural policy as well as working in a more realistic setting. Our main contribution is CopyCAT, an algorithm for taking full-control of neural policies. It produces a simple attack that is: targeted towards a policy, i.e., it aims at matching a neural policy's behavior with the one of an arbitrary policy; only altering observation of the environment rather than complete agent's inner state; composed of a finite set of pre-computed state-independent masks. This way it requires no additional time at inference hence it could be usable in a real-time setting. We introduce CopyCAT in the white-box scenario, with read-only access to the weights and the architecture of the neural policy. This is a realistic setting as prior work showed that after training substitute models, one could transfer an attack computed on these to the inaccessible attacked model . The context is the following: We are given any agent using a neuralnetwork for decision-making (e.g., the Q-network for value-based agents, the policy network for actor-critic or imitation learning methods) and a target policy we want the agent to follow. The only thing one can alter is the observation the agent receives from the environment and not the full input of the neural controller (the inner state). In other words, we are granted a read-only access to the agent's inner workings. In the case of Atari 2600 games, the agents builds its inner state by stacking the last four observations. Attacking the agent's inner state means writing in the agent's memory of the last observations. The computed attack should be inferred fast enough to be used in real-time. 
We stress the fact that targeting a policy is a more general scheme than untargeted attacks where the goal is to stop the agent from taking its preferred action (hoping for it to take the worst). It is also more general than the targeted scheme of previous works where one wants the agent to take its least preferred action or to reach a specific state. In our setting, one can either hard-code or train a target policy. This policy could be minimizing the agent's true reward but also maximizing the reward for another task. For instance, this could mean taking full control of an autonomous vehicle, possibly bringing it to any place of your choice. We exemplify this approach on the classical benchmark of Atari 2600 games. We show that taking control of a trained deep RL agent so that its behavior matches a desired policy can be done with this very simple attack. We believe such an attack reveals the vulnerability of autonomous agents. As one could lure them into following catastrophic behaviors, autonomous cars, robots or any agent with high dimensional inputs are exposed to such manipulation. This suggests that it would be worth studying new defense mechanisms that could be specific to RL agents, but this is out of the scope of this paper. In Reinforcement Learning (RL), an agent interacts sequentially with a dynamic environment so as to learn an optimal control. To do so, the problem is modeled as a Markov Decision Process. It is a tuple {S, A, P, r, γ} with S the state space, A the action space we consider as finite in the present work, P the transition kernel defining the dynamics of the environment, r a bounded reward function and γ ∈ a discount factor. The policy π maps states to distributions over actions: π(·|s). The (random) discounted return is defined as G = t≥0 γ t r t. The policy π is trained to maximize the agent expected discounted return. The function V π (s) = E π [G|s 0 = s] denotes the value function of policy π (where E π [·] denotes the expectation over all possible trajectories generated by policy π). We also call µ 0 the initial state distribution and ρ(π) = E s∼µ0 [V π (s)] the expected cumulative reward starting from µ 0. Value-based algorithms use the value function, or more frequently the action-value function Q π (s, a) = E π [G|s 0 = s, a 0 = a], to compute π. To handle large state spaces, deep RL uses deep neural networks for function approximation. For instance, value-based deep RL parametrize the action-value function Q ω with a neural network of parameters ω and deep actor-critics directly parametrize their policy π θ with a neural network of parameters θ. In both cases, the taken action is inferred by a forward-pass in a neural network. Adversarial examples were introduced in the context of supervised classification. Given a classifier C, an input x, a bound on a norm., an adversarial example is an input x = x + η such that C(x) = C(x) while x − x ≤. Fast Gradient Sign Method (FGSM) is a widespread method for generating adversarial examples for the L ∞ norm. From a linear approximation of C, it computes the attack η as: with l(θ, x, y) the loss of the classifier and y the true label. As an adversary, one wishes to maximize the loss l(θ, x + η, y) w.r.t. η. Presented this way, it is an untargeted attack. It pushes C towards misclassifying x in any other label than y. It can easily be turned into a targeted attack by, instead of l(θ, x + η, y), optimizing for −l(θ, x + η, y target) with y target the label the adversary wants C to predict for x. 
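As a concrete illustration, the targeted variant can be written in a few lines of PyTorch. This is a generic sketch of a one-step targeted attack, not code taken from any of the cited works; `model` stands for any differentiable classifier returning logits for a batched input, and `y_target` is the batch of target labels.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, y_target, eps):
    """One-step targeted FGSM under an L-infinity budget eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_target)   # l(theta, x, y_target)
    loss.backward()
    # Step against the gradient to decrease the loss of the target label.
    eta = -eps * x.grad.sign()
    return (x + eta).detach()
```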
This attack, optimized for the L ∞ norm can also be turned into an L 2 attack by taking: As shown by Eq. equation 1 and Eq. equation 2, these attacks are computed with one single step of gradient, hence the term "fast". These two attacks can be turned into -more efficient, yet sloweriterative methods by taking several successive steps of gradients. These methods will be referred to as iterative-FGSM. When using deep networks to compute its policy, an RL agent can be fooled the same way as a supervised classifier. As a policy can be seen as a mapping S → A, untargeted FGSM can be applied to a deep RL agent to stop it from taking its preferred action: a * = arg max a∈A π(a|s). Similarly targeted FGSM can be used to lure the agent into taking a specific action. Yet, this would mean having to compute a new attack at each time step, which is generally not feasible in a real-time setting. Moreover, with this formulation, it needs to directly modify the agent's inner state, the input of the neural policy, which is a strong assumption. In this work, we propose CopyCAT. It is an attack whose goal is to lure an agent into having a given behavior, the latter being specified by another policy. CopyCAT's goal is not only to lure the agent into taking specific actions but to fully control its behavior. Formally, CopyCAT is composed of a set of additive masks ∆ = {δ i} 1≤i≤|A| than can be used to drive a policy π to follow any policy π target. Each additive mask δ i is pre-computed to lure π into taking a specific action a i when added to the current observation regardless of the content of the observation. It is, in this sense, a universal attack. CopyCAT is an attack on raw observations and, as ∆ is pre-computed, it can be used online in a real-time setting with no additional computation. Notations We denote π the attacked policy and π target the target policy. At time step t, the policy π outputs an action a t taking the state s t as input. The agent state is internally computed from the past observations and we denote f the observations-to-state function: Data Collection In order to be pre-computed, CopyCAT needs to gather data from the agent. By watching the agent interacting with the environment, CopyCAT gathers a dataset D of K episodes made of observations: We recall that the objective in this setting is for CopyCAT to work with a read-only access to the inner workings of the agent. We thus stress that D is made of observations rather than states. If CopyCAT is successful, π is going to behave as π target and thus may experience observations out of the distribution represented in D. Yet, as will be shown, CopyCAT transfers to unseen observations. We hypothesize that, as we build a universal attack, the learned attack is able to move the whole support of observations in a region of R N where π chooses a precise action. Training A natural strategy for building an adversarial example targeted towards labelŷ is the following. Given a classifier P(y|x) parametrized with a neural network and an input example x, one computes the adversarial examplex = x + δ by maximizing log P(ŷ|x) subject to the constraint: The adversary then performs either one step of gradient (FGSM) or uses an iterative method to solve the optimization problem. Instead, CopyCAT is built for its masks to be working whatever the observation it is applied to. 
For each a i ∈ A we build δ i, the additional masks luring π into taking action a i, by maximizing over δ i: We restrict the method to the case where f, the function building the agent's inner state from the observations, is differentiable. Eq. 3 is optimized by alternating between gradient steps with adaptive learning rate and projection steps onto the L ∞ -ball of radius. Unlike FGSM, CopyCAT is a full optimization method. It does not take one single step of gradient. CopyCAT has two main parameters: ∈ R +, a hard constraint on the L ∞ norm of the attack and α ∈ R +, a regularization parameter on the L 2 norm of the attack. Inference Once ∆ is computed, the attack can be used on π to make it follow any policy π target. At each time step t and given past observations, π target infers an action a target t given the sequence of observations and the corresponding mask δ a target t ∈ ∆ is applied to the last observation o t before being passed to the agent. Vulnerabilities of neural classifiers were highlighted by and several methods were developed to create the so-called adversarial examples, maliciously crafted inputs fooling deep networks. In sequential-decision-making, previous works use them to attack deep reinforcement learning agents. However these attacks are not always realistic. The method from uses fast-gradient-sign method , for the sole purpose of destroying the agent's performance. What's more, it has to craft a new attack at each time step. This implies backpropagating through the agent's network, which is not feasible in real-time. Moreover, it modifies directly the inner state of the agent by writing in its memory, which is a strong assumption to take on what component of the agent can be altered. The approach of allows the number of attacked states to be divided by four, yet it uses the heavy optimization scheme from for crafting their adversarial examples. This is, in general, not doable in a real-time setting. They also take the same strong assumption of having a read & write-access to the agent's inner workings. To the best of our knowledge, they are the first to introduce a targeted attack. However, the setting is restricted to targeting one dangerous state. proposes a method to lure the agent into taking its least preferred action in order to reduce its performance but still uses computationally heavy iterative methods at each time step. proposed an adversarial method for robust training of agents but only considered attacks on the dynamic of the environment, not on the visual perception of the agent. and developed adversarial environment generation to study agent's generalization and worst-case scenarios. Those are different from this present work where we enlighten how an adversary might take control of a neural policy. We wish to build an attack targeted towards the policy π target. At a time step t, the attack is said to be successful if π under attack indeed chooses the targeted action selected by π target. When π is not attacked, the attack success rate corresponds to the agreement rate between π and π target, measuring how often the policies agree along an unattacked trajectory of π. Note that we only deal with trained policies and no learning of neural policies is involved. In other words, π and π target are trained and frozen policies. What we really want to test is the ability of CopyCAT to lure π into having a specific behavior. For this reason, measuring the attack success rate is not enough. 
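To summarize the training and inference procedures described above, here is a simplified PyTorch-style sketch. It is illustrative only: `policy_logits`, `policy`, `target_policy` and `stack_last_four` are assumed helper callables (the last one building the four-frame agent state from a list of observations), and the dataset is assumed to yield lists of four consecutive observations.

```python
import torch

def train_copycat_masks(policy_logits, dataset, num_actions, eps, alpha,
                        lr=0.05, epochs=10):
    """Learn one universal additive mask per action by projected gradient
    ascent on log pi(a_i | f(o_{1:t-1}, o_t + delta_i)), with an L2 penalty."""
    obs_shape = dataset[0][-1].shape
    masks = [torch.zeros(obs_shape, requires_grad=True) for _ in range(num_actions)]
    opts = [torch.optim.Adam([d], lr=lr) for d in masks]
    for _ in range(epochs):
        for frames in dataset:                       # frames: last 4 observations
            for a_i, (delta, opt) in enumerate(zip(masks, opts)):
                state = stack_last_four(frames[:-1] + [frames[-1] + delta])
                logp = torch.log_softmax(policy_logits(state), dim=-1)[a_i]
                loss = -logp + alpha * delta.pow(2).sum()   # maximize log-prob, L2 penalty
                opt.zero_grad(); loss.backward(); opt.step()
                with torch.no_grad():                # projection onto the L-inf ball
                    delta.clamp_(-eps, eps)
    return [d.detach() for d in masks]

def attacked_step(policy, target_policy, masks, obs_history, new_obs):
    """Online use of the pre-computed masks: no gradient computation needed."""
    a_target = target_policy(stack_last_four(obs_history + [new_obs]))
    attacked_obs = (new_obs + masks[a_target]).clamp(0, 255)
    a_taken = policy(stack_last_four(obs_history + [attacked_obs]))
    return a_taken, a_target    # the attack succeeds at this step if a_taken == a_target
```

Counting how often the action taken under attack equals the targeted action over a trajectory gives exactly the attack success rate defined above.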
Having a high success rate does not necessarily mean the macroscopic behavior of the attacked agent matches the desired one as will be shown further in this section. Cumulative reward as a proxy for behavior We design the following setup. The agent has a policy π trained with DQN . The policy π target is trained with Rainbow . We select Atari games with a clear difference in terms of performance between the two algorithms (where Rainbow obtains higher average cumulative reward than DQN). This way, in addition to measuring the attack success rate, we can compare the cumulative reward obtained by π under attack ρ(π) to ρ(π target) as a proxy of how well π's behavior is matching the behavior induced by π target. In this setup, if the attacked policy indeed gets cumulative rewards as high as the ones obtained by π target, it will mean that we do not simply turned some actions into other actions we targeted, but that the whole behavior induced by π under attack matches the one induced by π target. This idea that, in reinforcement learning, cumulative reward is the right way to monitor an agent's behavior has been used and developed by the inverse reinforcement learning literature. Authors from argued that the value of a policy, i.e. its cumulative reward, is the most compact, robust and transferable description of its induced behavior. We argue that measuring cumulative reward is thus a reasonable proxy for monitoring the behavior of π under attack. At this point, we would like to carefully defuse a possible misunderstanding. Our goal is not to show that DQN's performance can be improved by being attacked. We simply want to show that its behavior can be fully manipulated by an opponent and we use the obtained cumulative reward as a proxy for the behavior under attack. Baseline We set the objective of building a real-time targeted attack. We thus need to compare our algorithm to baselines applicable ot this scenario. The fastest state-of-the-art method can be seen as a variation of. It applies targeted FGSM at each time step t to compute a new attack. It first infers the action a target and then back-propagates through the attacked network to compute their attack. CopyCAT only infers a target and then applies the corresponding pre-computed mask. Both methods can thus be considered usable in real-time yet CopyCAT is still faster at inference. We set the objective of attacking only observations rather than complete states so we do not need a write-access to the agent's inner workings. DQN stacks four consecutive frames to build its inner state. We thus compare CopyCAT to a version of the method from where the gradient inducing the FGSM attack is only computed w.r.t the last observation, so it produces an attack comparable to CopyCAT, i.e., on a single observation. The gradient from Eq. 1: ∇ st l(θ, s t, a target) becomes ∇ ot l(θ, f (o t, o 1:t−1), a target ). To keep the comparison fair, a target is always computed with the exact same policy π target as in CopyCAT. FGSM-L ∞ has the same parameter as CopyCAT, bounding the L ∞ norm of the attack. CopyCAT has an additional regularization parameter α allowing the attack to have, for a same, a lower energy and thus be less detectable. We will compare CopyCAT to the attack from showing how behaviors of π under attacks match π target when these attacks are of equal energy. Full optimization-based attacks would not be inferred fast enough to be used in a sequential decision making problem at each time step. 
Experimental setup We always turn the sticky actions on, which make the problem stochastic . An attacked observation is always clipped to the valid image range, 0 to 255. For Atari games, DQN uses as its inner state a stack of four observations: For learning the masks of ∆, we gather trajectories generated by π in order to fill D with 10k observations. We use a batch size of 8 and the Adam optimizer with a learning rate of 0.05. Each point of each plot is the average over 5 policy π seeds and 80 runs for each seed. Only one seed is used for π target to keep comparison in terms of cumulative reward fair. CopyCAT has an extra parameter α, we test its influence on the L 2 norm of the produced attack. For a given, FGSM-L ∞ computes an attack η of maximal energy. As given by Eq. 1, its L 2 norm is η 2 = √ N 2 with N the input dimension. For a given, CopyCAT produces |A| masks. We show in Fig. 1 the largest L 2 norm of the |A| masks for a varying α (plain curves) and compare it to the norm of the FGSM-L ∞ attack (dashed lines). We want to stress that the attacks are agnostic to the training algorithm so the are easily transferred to other agents using neural policies trained with another algorithm. As can be seen on Fig. 1, for a given and for the range of tested α, the attack produced by CopyCAT has lower energy than FGSM-L ∞. This is especially significant for higher values of, e.g higher than 0.05. Influence of parameters over the ing behavior We wish to show how the agent behaves under attack. As explained before, this analysis is twofold. First, we study in terms of attack success rate -rate of action chosen by π matching a target when shown attacked observations-as done in supervised learning. Second, we study the behavior matching through the cumulative rewards under attack ρ(π). What we wish to verify in the following experiment is CopyCAT's ability to lure an agent into following a specific behavior. If the attack success rate is high (close to 1), we know that, on a supervised-learning perspective, our attack is successful: it lures the agent into taking specific actions. If, in addition, the average cumulative reward obtained by the agent under attack reaches ρ(π target) it means that the attack is really successful in terms of behavior. We recall that we attack a policy with a target policy reaching higher average cumulative reward. We show on Fig. 2 and 3 (two different games) the attack success rate (left) and the cumulative reward (right) for CopyCAT (plain curves) for different values of the parameters α and, as well as for unattacked π (green dashed line) and π target (black dashed lines). We observe a gap between having a high success rate and forcing the behavior of π to match the one of π target. There seems to exist a threshold corresponding to the minimal success rate required for the behaviors to match. For example, as seen on the left, CopyCAT with = 5 and α < 10 −5 (green curve) is enough to get a 85% success rate on the attack. However, as seen on the right, it is not enough to get the behavior of π under attack to match the one of the target policy as the reward obtained under attack never reaches ρ(π target). Overall, we observe on Fig. 2-right and Fig. 3 -right that with high enough ≥ 0.04 and α < 10 −6, CopyCAT is able to consistently lure the agent into following the behaviour induced by π target. Comparison to We compare CopyCAT to the targeted version of FGSM on a setup where the gradient is computed only on the last observation. 
As in the last paragraph, we study both the attack success rate and the average cumulative reward under attack. We ask the question: is CopyCAT able to lure the agent into following the targeted behavior? Is it better at this task than FGSM in the real-time and read-only setting? We show on Fig. 4 and 5 (two different games) the success rate of CopyCAT and FGSM (y-axis, left) and the average cumulative reward under attack (y-axis, right). These values are plotted (i) against the L 2 norm of the attack for FGSM and (ii) against the largest L 2 norm of the masks: max i δ i 2 for CopyCAT. We only plot the standard deviation on the attack success rate because it corresponds to the intrinsic noise of CopyCAT. We do not plot it for cumulative reward for the reason that one seed of π target has a great variance (with the sticky actions) and matching π target, even perfectly, implies matching the variance of its cumulative rewards. The same phenomenon can be observed on Fig. 2 and 3: CopyCAT is not itself unstable (left figures, when α decreases or increases, the rate of successful attacks consistently increases). Yet the cumulative reward is noisier, as the behavior of π is now matching with a high-variance policy. As observed on Fig. 4-left and Fig. 5 -left, FGSM is able to turn a potentially significant part of the taken actions into the targeted actions (maximal success rate around 75% on Space Invaders). However, it is never able to make π's behavior match with π target's behavior as seen on Fig. 4-right and Fig. 5 -right. The average cumulative reward obtained by π under FGSM attack never reaches the one of π target. On the contrary, CopyCAT is able to successfully lure π into following the desired macroscopic behavior. First, it turns more than 99% of the taken actions into the targeted actions. Second, it makes ρ(π) under attack reach ρ(π target). Moreover, it does so using only a finite set of masks while the baselines compute a new attack at each time step. An example of CopyCAT is shown on Fig. 6. The patch δ i aiming at action "no-op" (i.e. do nothing) is applied to an agent playing Space Invaders. The patch itself can be seen on the right (gray represents a zero pixel, black negative and white positive). On the left, the unattacked observation. In the middle, the attacked observation. Below the images, the action taken by the same policy π when shown the different situations in an online setting. In this work, we built and showed the effectiveness of CopyCAT, a simple algorithm designed to attack neural policies in order to manipulate them. We showed its ability to lure a policy into having a desired behavior with a finite set of additive masks, usable in a real-time setting while being applied only on observations of the environment. We demonstrated the effectiveness of these universal masks in Atari games. As this work shows that one can easily manipulate a policy's behavior, a natural direction of work is to develop robust algorithms, either able to keep their normal behaviors when attacked or to detect attacks to treat them appropriately. Notice however that in a sequential-decisionmaking setting, detecting an attack is not enough as the agent cannot necessarily stop the process when detecting an attack and may have to keep outputting actions for incoming observations. It is thus an exciting direction of work to develop algorithm that are able to maintain their behavior under such manipulating attacks. 
Another interesting direction of work towards real-life attacks is to test targeted attacks on neural policies in the black-box scenario, with no access to the network's weights and architecture. However, targeted adversarial examples are harder to compute than untargeted ones, and we may experience more difficulties in reinforcement learning than in supervised learning. Indeed, learned representations are known to be less interpretable and the variability between different random seeds to be higher than in supervised learning. Different policies trained with the same algorithm may thus lead to S → A mappings with very different decision boundaries. Transferring targeted examples may not be easy and would probably require training imitation models to obtain mappings similar to π in order to compute transferable adversarial examples. In order to keep the core paper not too long, we only showed a subset of the results in Sec. 5; they are all provided here. The explanations and interpretations can be found in Sec. 5. Left: HERO. Center: Space Invaders. Right: Air Raid. In this appendix, we provide additional experiments to study further various aspects of the proposed approach. In Sec. 5, the attacked agent was a trained DQN agent, while the target policy was a trained Rainbow agent. If these agents have clearly different behaviors, one could argue that they were initially trained to solve the same task (Rainbow achieving better results). To further assess CopyCAT's ability to lure a policy into following another policy, we therefore attack an untrained DQN, with random weights, to follow the policy π target (still obtained from a trained Rainbow agent). Left: HERO. Center: Space Invaders. Right: Air Raid. We see that in this case, FGSM is able to lure π into following π target at least as well as CopyCAT. This shows that it is easier to fool an untrained network than a trained one. As expected, trained networks are more robust to adversarial examples. CopyCAT is also able to lure the agent into following π target. B.2 observed the transferability of adversarial examples between different models. Thanks to this transferability, one is able to attack a model without having access to its weights. By learning attacks on proxy models, one can build black-box adversarial examples. The difficulty of building targeted adversarial examples in this black-box setting with state-of-the-art methods has also been highlighted. Starting from the intuition that universal attacks may transfer better between models, we enhanced CopyCAT for it to work in the black-box setting. We consider a setting where the adversary (i) is given a set of proxy models {π 1, ..., π n} trained with the same algorithm as π, (ii) can also query the attacked model π, but (iii) has no access to its weights. In the black-box setting, CopyCAT is divided into two steps: training multiple additional masks and selecting the highest performing ones. Training The ensemble-based method from previous work computes its additional mask by attacking the classifier given by the mean predictions of the proxy models. We instead consider that our attack should be efficient against any convex combination of the proxy models' predictions. For each action a i, we compute the mask δ i by maximizing over 100 epochs on the dataset D: with ∆ the uniform distribution over the n-simplex. For each action, 100 masks are computed this way. These masks are just computed with different random seeds. Selection We then compute a competition accuracy for each of these random seeds.
This accuracy is computed by querying π on states built as follows. We take four consecutive observations in D and apply 3 masks, randomly selected among the previously computed masks, on the first 3 observations; the mask δ_i that is actually being tested is applied on the last observation. The attack is considered successful if π outputs the action a_i corresponding to δ_i. For each action, the mask with the highest competition accuracy among the 100 computed masks is selected. The selected masks are then used online as in the white-box setting. Results. We provide preliminary results for the considered black-box setting. Four proxy models of DQN are used to attack π. Again, it is attacked to make it follow the policy π_target given by Rainbow. The results can be found in Fig. 8. Each dot is an attack tested over 80 new episodes. The y-axis is the mean success rate (middle) or the cumulative reward (right). The x-axis is the maximal norm of the attack. The figure on the left gives the value of α (on the y-axis) corresponding to each color. We can observe that the proposed black-box attack is effective, even if less efficient than its white-box counterpart. The proposed black-box CopyCAT could certainly be improved, and we leave this for future work. Reinforcement learning has led to great improvements for games or robot manipulation but is not yet able to tackle realistic-image environments. While this paper is focused on weaknesses of reinforcement learning agents, the relevance of the proposed method would be diminished if one could not compute universal adversarial examples on realistic datasets. We thus present this proof-of-concept, showing the existence of universal adversarial examples on ImageNet. Note that prior work already showed the existence of universal attacks, but it considers a patch covering a part of the image rather than an additive mask. We computed a universal attack on VGG16, targeted towards the label "tiger shark", the same way CopyCAT does. It is trained on a small training set (10 batches of size 8) and tested on a random subset of the ImageNet validation dataset. The network is taken from Keras pretrained models and attacked in a white-box setting. The same procedure as CopyCAT is used. 1000 images are randomly selected in the validation set. Only 80 are used for training and the rest is used for testing. The attack is trained with the same loss, same learning rate and same batch size as CopyCAT, for 200 epochs. The (rescaled) computed attack is shown in Fig. 9. Examples of attacked images from the test set are visible in Fig. 10. After 200 epochs, the train accuracy is 90% and the test accuracy 88.44%. This proof-of-concept experiment validates the existence of universal adversarial examples on realistic images and shows that CopyCAT's scope is not reduced to Atari-like environments. More generally, the existence of adversarial examples has been shown to be a property of high-dimensional manifolds. Going towards more realistic, hence higher-dimensional, images should on the contrary allow CopyCAT to more easily find universal adversarial examples. | We propose a new attack for taking full control of neural policies in realistic settings. | 694 | scitldr
Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems. Previous hybrid recommendation methods are effective to deal with the cold-start issues by extracting real latent factors of cold-start items(users) from side information, but they still suffer low efficiency in online recommendation caused by the expensive similarity search in real latent space. This paper presents a collaborative generated hashing (CGH) to improve the efficiency by denoting users and items as binary codes, which applies to various settings: cold-start users, cold-start items and warm-start ones. Specifically, CGH is designed to learn hash functions of users and items through the Minimum Description Length (MDL) principle; thus, it can deal with various recommendation settings. In addition, CGH initiates a new marketing strategy through mining potential users by a generative step. To reconstruct effective users, the MDL principle is used to learn compact and informative binary codes from the content data. Extensive experiments on two public datasets show the advantages for recommendations in various settings over competing baselines and analyze the feasibility of the application in marketing. With the explosion of e-commerce, most customers are accustomed to receiving a variety of recommendations, such as movies, books, news, or hotels they might be interested in. Traditional recommender systems just recommended items that are similar to what they liked or rated in the previous. Recommendations help users find their desirable items, and also creates new revenue opportunities for vendors, such as Amazon, Taobao, eBay, etc. Among them, one of the most popular recommendation methods, collaborative filtering is dependent on a large amount of user-item interactive information to provide an accurate recommendation. However, most of new e-commerce vendors do not have enough interactive data, which leads to low recommendation accuracy, i.e., cold-start issues. Previous studies on cold-start issues generally modeled as a combination of collaborative filtering and content filtering, known as hybrid recommender systems. Specifically, they learned real latent factors by incorporating the side information into the interactive data. Such as Collaborative Deep Learning (CDL) , Visual Bayesian Personalized Ranking (VBPR) , Collaborative Topic modeling for Recommedation (CTR) , and the DropoutNet for addressing cold start (DropoutNet) , ABCPRec for Bridging Consumer and Producer Roles for User-Generated Content Recommendation (ABCPRec) . All of the above hybrid recommender systems were modeled in real latent space, which leads to low efficiency for the online recommendation with the increasing scale of datasets. discrete objectives. Thus many scholars learned binary codes by some approximate techniques, such as the two-stage hashing learning method utilized in Preference Preserving Hashing(PPH) and the Iterative Quantization(ITQ) . To reduce information loss, two learning-based hashing frameworks: bit-wise learning and block-wise learning were respectively proposed in hashing based recommendation frameworks (; ; . However, due to the requirement of binary outputs for learning-based hashing frameworks, the training procedure is expensive for large-scale recommendation, which motivates us to propose a generative approach to learn hash functions. 
In this paper, we propose the collaborative generated hashing (CGH) to learn hash functions of users and items from content data with the principle of Minimum Description Length (MDL). In the marketing area, mining potential customers is crucial to e-commerce. CGH provides a strategy to discover potential users by the generative step. To reconstruct effective users, uncorrelated and balanced constraints are imposed to learn compact and informative binary codes with the principle of the MDL. In particular, discovering potential customers is vital to the success of adding new items to a recommendation platform. Specifically, for a new item, we can generate a new potential user by the generative step (detailed in Section 2.1), and then search for the nearest potential users in the user set. By recommending a new product to the potential users who might be interested in it but did not plan to buy it, further e-commerce strategies can be developed to attract those potential users. We organize the paper as follows: Section 2 introduces the main techniques of CGH. We first introduce the framework of CGH and compare it with the closely related competing baselines, CDL and DropoutNet; we then formulate the generative step in Section 2.1 and the inference step in Section 2.2, respectively; we finally summarize the training objective and introduce the optimization in Section 2.3. In particular, we demonstrate the process of mining potential users for the marketing application in Section 2.1. Section 3 presents the experimental results for marketing analysis and recommendation accuracy in various settings. Section 4 concludes the paper. The main contributions of this paper are summarized as follows: We propose the Collaborative Generated Hashing (CGH) with the principle of MDL to learn compact but informative hash codes, which applies to various settings for recommendation. We provide a marketing strategy by discovering potential users through the generative step of CGH, which can be applied to boost e-commerce development. We evaluate the effectiveness of the proposed CGH compared with the state-of-the-art baselines, and demonstrate its robustness and convergence properties on the public datasets. The framework of the proposed CGH is shown in Fig. 1(c), where U, V and R are respectively the observed user content, item content and rating matrix. B and D are binary codes of users and items, respectively. CGH consists of the generative step marked as dashed lines and the inference step denoted by solid lines. Once training is finished, we fix the model and make forward passes to obtain binary codes B and D through the inference step, and then conduct recommendation. For the marketing application, we create a new user via the generative step. In comparison with the closely related baseline CDL, the proposed CGH aims to learn binary codes instead of real latent vectors P and Q, due to the advantage of hashing for online recommendation; in addition, CGH optimizes an objective with the principle of MDL, while CDL optimized the joint objective of the rating loss and the item content reconstruction error. In comparison with DropoutNet, CGH can be used as a marketing strategy by discovering potential users; in addition, CGH learns hash functions by a stacked denoising autoencoder, while DropoutNet obtained real latent factors by a standard neural network.
In the following we start by first formulating the generative process and demonstrating the application in the marketing area; we then formulate the inference step; we finally summarize the training objective and the optimization method. Given a sparse rating matrix R and item content data V ∈ R^{d_v}, where d_v is the dimension of the content vector and V is stacked by the bag-of-words vectors of item content in the item set V, most previous studies were focused on modeling deterministic frameworks to learn representations of items for item recommendation, such as CDL, CTR and DropoutNet. In this paper, we discover a new strategy from a marketing perspective for item recommendation - mining potential users. We demonstrate the process of mining potential users for an item through the generative step in Fig. 2. After the inference step, the binary code of item j is available. By maximizing the similarity function δ(b_i, d_j) (detailed in Section 2.1), the optimal binary code b_p is obtained. Then we generate the new user u_p through the generative step. Finally, we find potential users from the user set by some nearest-neighbor algorithm, such as KNN (a code sketch of this retrieval procedure is given at the end of this subsection). As a marketing strategy, it can discover potential users for both warm-start items and cold-start items. Thus, from a marketing perspective, it can be regarded as another kind of item recommendation. Figure 2: Demonstration of mining potential users for an item j. After the inference step, d_j is available; we first find the most similar binary code b_p; we then generate a new potential user u_p by the generative process; we further search for the top-k nearest potential users from the user set by some nearest-neighbor algorithm (e.g., KNN). The generation process (also referred to as the decoding process) is denoted by dashed lines in Fig. 1. Fixing the binary codes b_i and d_j of user i and item j, the bag-of-words vector u_i of user i (v_j of item j) is generated via p(θ_u), and the rating r_ij is generated from b_i and d_j. We use a simple Gaussian distribution to model the generation of u_i and v_j given b_i and d_j, as in Stochastic Generative Hashing (SGH); here c_uk ∈ R^{d_u} denotes a codeword of the codebook with r codewords (and similarly for C_v), and d_u is the dimension of the bag-of-words vector of users. The prior is modeled as a multivariate Bernoulli distribution on the hash codes: p(b_i) ∼ B(ρ_u) and p(d_j) ∼ B(ρ_v), which gives the prior probabilities of the codes. We formulate the rating with the similarity between the binary codes of users and items, like the most successful recommender systems based on matrix factorization. The rating is thus drawn from a normal distribution centered at the similarity value δ(b_i, d_j), which denotes the similarity between the binary codes b_i and d_j; Hamdis(b_i, d_j) represents the Hamming distance between the two binary vectors, which has been widely applied in hashing-based recommender systems. C_ij is the precision parameter that serves as the confidence for r_ij, similar to that in CTR (C_ij = a if r_ij = 1 and C_ij = b otherwise), due to the fact that r_ij = 0 means that user i is either not interested in item j or not aware of it. With the generative model constructed, the joint probability of the observed ratings, content vectors and binary codes is given by the product of the rating likelihood, the content likelihoods and the priors defined above.
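To make the retrieval procedure of Fig. 2 concrete, the following sketch implements its three steps (finding the nearest binary code, decoding it through the generative step, and running KNN over the user set). It is only an illustrative NumPy sketch under stated assumptions: the helper decode_user (standing in for the Gaussian decoder p(u | b)) and all argument names are hypothetical, not the authors' implementation.

    import numpy as np

    def hamming_similarity(b, codes):
        # similarity = 1 - HammingDistance / r, for binary vectors in {0, 1}^r
        return 1.0 - np.abs(codes - b).sum(axis=1) / b.shape[0]

    def mine_potential_users(d_j, user_codes, decode_user, user_content, k=10):
        # 1. find the known user code most similar to the item code d_j
        b_p = user_codes[np.argmax(hamming_similarity(d_j, user_codes))]
        # 2. generate a virtual potential user u_p via the generative (decoding) step
        u_p = decode_user(b_p)
        # 3. retrieve the top-k nearest real users by content (plain Euclidean KNN)
        dists = np.linalg.norm(user_content - u_p, axis=1)
        return np.argsort(dists)[:k]    # indices of the k potential users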
The inference process (also referred to as the encoding process) is shown in Fig. 1(c): the binary latent variables b_i (d_j) depend on the content vector u_i (v_j) and the rating R (shaded in Fig. 1). Inspired by the recent work on generative hashing and DropoutNet, we use a multivariate Bernoulli distribution to model the inference of b_i and d_j with a linear parametrization: p_i and q_j are the results of an r-dimensional matrix factorization, and T_u and T_v are the transformation matrices of the linear parametrization, which map the concatenation of the content vector and the latent factor (of dimension d_u + r for users and d_v + r for items) to the r binary bits. Following SGH, the MAP solution of this Bernoulli inference model is readily given by a sign function applied to the linear projection. With the linear projection followed by a sign function, we can easily get the hash codes of users and items. However, hashing with a simple sign function suffers from large information loss, which motivates us to add constraints on the parameters in the inference step. To derive compact and informative hash codes for users and items, we add balanced and uncorrelated constraints in the inference step. The balanced constraint is proposed to maximize the entropy of each binary bit, and the uncorrelated constraint makes each bit independent of the others. Since b_i and d_j depend only on the parameters T_u and T_v, respectively, we add the constraints on T_u and T_v directly; the balanced and uncorrelated conditions on the codes are thus equivalent to constraints on these transformation matrices. By imposing the above constraints in the training step, compact and informative hash codes can be obtained through the inference process. Next we summarize the training objective and its optimization. Since our goal is to reconstruct users, items and ratings using the least information in the binary codes, we train CGH with the MDL principle, which finds the best parameters that maximally compress the training data while keeping the information they carry; thus CGH aims to minimize the expected amount of information related to the variational distribution q. Maximizing the posterior probability is equivalent to maximizing the variational bound L(q); considering only the variational distribution q(B, D), the objective becomes a sum of expected reconstruction terms plus a regularizer term with parameters Θ and Φ. By training this objective, we obtain binary codes, but some bits may still be correlated. To minimize the reconstruction error, SGH had to set the code length as long as r = 200. Our goal in this paper is to obtain compact and informative hash codes, so we impose the balanced and uncorrelated constraints on the hash codes. Maximizing the objective is then transformed into minimizing the constrained objective function L_CGH(Θ, Φ) of the proposed Collaborative Generative Hashing (CGH). The objective of CGH is a discrete optimization problem, which is difficult to optimize straightforwardly, so in the training stage the tanh function is utilized to replace the sign function in the inference step, and the continuous outputs are then used as a relaxation of the hash codes. With the relaxation, we train all components jointly with back-propagation. After training, we fix them and make forward passes to map the concatenated vectors in Ũ and Ṽ to binary codes B and D, respectively. The recommendation in various settings is then conducted using B and D with the similarity score δ(b_i, d_j) estimated as before. The training settings depend on the recommendation setting, i.e., warm-start, cold-start item, and cold-start user. L_CGH(Θ, Φ) aims to minimize the rating loss and the two content reconstruction errors with regularizers. (a.) For the warm-start recommendation, ratings for all users and items are available; the above objective is then trivially optimized by setting the content weights to 0 and learning the hash functions with the observed ratings R. (b.)
For the cold-start item recommendation, ratings for some items are missing; the objective is then optimized by setting the user content weight to 0 and learning the parameters with the observed ratings R and the item content V. (c.) The training setting for the cold-start user recommendation is similar to that of the cold-start item recommendation. We validate the proposed CGH on two public datasets, CiteULike and the RecSys 2017 Challenge dataset, from the following two aspects. Marketing analysis. To validate the effectiveness of CGH in the marketing area, we first define a metric to evaluate the accuracy of mining potential users; we then test the performance for warm-start items and cold-start items, respectively. Recommendation performance. We test the performance of CGH for recommendation in various settings, including warm-start, cold-start item, and cold-start user, in terms of Accuracy@k. In the following, we first introduce the experimental settings, followed by the experimental analysis from the above aspects. To evaluate the power of finding potential users and the accuracy of recommendation in different settings, we use the two datasets described next. The CiteULike dataset contains 5,551 users, 16,980 articles, 204,986 observed user-article binary interaction pairs, and article abstract content. Similar to prior work, we extract bag-of-words item vectors with dimension d_v = 8000 by ranking the TF-IDF values. The RecSys 2017 Challenge dataset is the only publicly available dataset that contains both user and item content data, enabling both cold-start item and cold-start user recommendation. It contains 300M user-item interactions from 1.5M users to 1.3M items, together with content data collected from the career-oriented social network XING (a European analog of LinkedIn). Following prior work, we evaluate all methods on binary rating data, with user content of dimension d_u = 831 and item content of dimension d_v = 2738 (831 user features and 2738 item features form the dimensions of the user and item content). We randomly split the binary interaction (rating) matrix R into three disjoint parts: warm-start ratings R_w, cold-start user ratings R_u, and cold-start item ratings R_v; R_w is further split into the training dataset R_wt and the testing dataset R_we. Correspondingly, the user and item content datasets are split into three disjoint parts. The random selection is carried out 5 times independently, and we report the experimental results as average values. The ultimate goal of recommendation is to find the top-k items that users may be interested in. Accuracy@k has been widely adopted by many previous ranking-based recommender systems, so we adopt the ranking-based evaluation metric Accuracy@k to evaluate the quality of the recommended item ranking list. Metric for Marketing Application. For this new application of the recommender system, there is not yet a metric to evaluate the marketing performance. Thus, in this paper, we define an evaluation metric similar to the ranking-based metric Accuracy@k used for the warm-start and cold-start recommendation in this paper. From Fig. 2, we discover the k nearest potential users for an item j. The basic idea of the metric is to test whether the user that is really interested in an item appears in the list of k potential users. For each positive rating (r_ij = 1) in the testing dataset D_test, we randomly choose 1000 negative users (users k with r_kj = 0) and find k potential users in the resulting 1001-user set; we then check whether the positive user i (with positive rating r_ij = 1) appears in the k potential users list. If the answer is 'yes' we have a 'hit', and a 'miss' otherwise. The metric, also denoted by Accuracy@k, is formulated as Accuracy@k = #hit@k / |D_test|, where |D_test| is the size of the test set and #hit@k denotes the number of hits in the test set.
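As a sketch of how this hit-based metric can be computed, assuming a retrieval routine mine_potential_users(item, candidate_users, k) like the one sketched in Section 2.1 (the routine name and signature are hypothetical):

    import numpy as np

    def marketing_accuracy_at_k(test_pairs, ratings, mine_potential_users, k=10, n_neg=1000, seed=0):
        # test_pairs: list of (i, j) with r_ij = 1 in D_test; ratings: binary matrix R
        rng = np.random.default_rng(seed)
        hits = 0
        for i, j in test_pairs:
            negatives = np.flatnonzero(ratings[:, j] == 0)        # users k with r_kj = 0
            sampled = rng.choice(negatives, size=n_neg, replace=False)
            candidates = np.concatenate(([i], sampled))           # 1001 candidate users
            top_k = mine_potential_users(j, candidates, k)
            hits += int(i in top_k)                               # a 'hit' if the true user appears
        return hits / len(test_pairs)                             # Accuracy@k = #hit@k / |D_test|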
The experiments evaluate the performance of the marketing application in mining potential users for warm-start items on the test dataset R_we and for cold-start items on R_v. Specifically, we first train the model on the training dataset R_wt and the corresponding user and item content data. When the training is completed, we fix the parameters and obtain hash codes b_i and d_j by making forward passes. Then we generate k potential users for items in the test dataset by the procedure demonstrated in Fig. 2, and evaluate the quality of the potential users list by the Accuracy@k defined in Section 3.2. The marketing analysis for warm-start items and cold-start items is reported in Fig. 3 (Left), which shows how the accuracy value varies with the number of potential users. It indicates that the accuracy increases with the number of potential users for both the cold-start and the warm-start settings. This is reasonable because mining more potential users yields a greater accuracy value as defined in Section 3.2. In particular, the proposed CGH is effective for cold-start items, which indicates that further e-commerce strategies can be developed for new items to attract those potential users. Besides, from the perspective of marketing, warm-start recommendation and cold-start recommendation have a smaller gap than in traditional recommendation. Robust Testing. We evaluate how the performance varies with the number of users who are really interested in the target item in the test set. The experimental results shown in Fig. 3 (Center) indicate that the accuracy grows steadily with the size of the test set, which reveals that CGH is robust for the marketing application. Thus, it is practical to use it in sparse and cold-start settings. Convergence of CGH. Fig. 3 (Right) demonstrates the convergence of the proposed CGH; the reconstruction errors of ratings, user content and item content, as well as the total error, converge as the number of samples seen by CGH grows, which further validates the correctness and effectiveness of the proposed CGH. Accuracy for warm-start Recommendation. Fig. 4 (Left) shows the accuracy comparison of warm-start recommendation on the CiteULike dataset, in which collaborative generated embedding (CGE) denotes the real-valued version of CGH. The figure shows that the proposed CGH (CGE) has comparable performance with the other hybrid recommender systems. The proposed CGH is a hashing-based recommendation method, hence binary vectors are used for recommendation, which has the advantage in online recommendation as introduced in Section 1, while the baselines are real-valued methods that conduct recommendation in a real latent space. Since real latent vectors intuitively carry more information than hash codes, it is acceptable to have small gaps between the real-valued hybrid recommendation and the hashing-based recommendation. In addition, there is still a small gap between the real-valued version CGE and DropoutNet, because the reconstruction error is considered in CGH (CGE) while DropoutNet does not consider it. However, the reconstruction is significant in the generative step of CGH, which makes it feasible to mine effective potential users; thus CGH (CGE) has the advantage in the marketing application.
Accuracy for cold-start item recommendation. This experiment studies the accuracy comparison between competing hybrid recommender systems and CGH under the same cold-start item setting. We test the performance on the test dataset R_v introduced in Section 3.1. Specifically, in R_v each item (cold-start item) has fewer than 5 positive ratings. We then select users with at least one positive rating as test users. For each test user, we first choose his/her ratings related to cold-start items as the test set, and the remaining ratings as the training set. Our goal is to test whether the marked-off cold-start items can be accurately recommended to the right user. The experimental results for cold-start item recommendation are shown in Fig. 4 (Center). We conclude that CGH has comparable performance with the competing baselines and achieves better performance than CTR. The results evaluated by another metric, MRR (detailed in Appendix A), are similar. Accuracy for cold-start user recommendation. We also test the performance in the cold-start user setting on the test dataset R_u introduced in Section 3.1. Specifically, in R_u each user (cold-start user) has fewer than 5 positive ratings. We then select items with at least one positive rating as test items. For each test item, we first choose the ratings related to cold-start users as the test set, and the remaining ratings as the training set. Our goal is to test whether the test item can be accurately recommended to the marked-off users. Since only DropoutNet can be applied to cold-start user recommendation, we only compare the performance of CGH with DropoutNet. The experimental results for cold-start user recommendation shown in Fig. 4 (Right) indicate that our proposed CGH has similar performance to DropoutNet. Besides, CGH has the advantage of being applicable in the marketing area. In this paper, a generative recommendation framework called collaborative generated hashing (CGH) is proposed to address the cold-start and efficiency issues for recommendation. The two main contributions put forward in this paper are: we develop a collaborative generated hashing framework with the principle of Minimum Description Length (MDL) together with uncorrelated and balanced constraints on the inference process to derive compact and informative hash codes, which is significant for the accuracy of recommendation and marketing; and we propose a marketing strategy based on the proposed CGH, specifically, we design a framework to discover the k potential users by the generative step. We evaluate the proposed scheme on the two public datasets; the experimental results show the effectiveness of the proposed CGH for both warm-start and cold-start recommendation. | It can generate effective hash codes for efficient cold-start recommendation and meanwhile provide a feasible marketing strategy. | 695 | scitldr
Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circuit Satisfiability problem. Our framework is built upon two fundamental contributions: a rich embedding architecture that encodes the problem structure and an end-to-end differentiable training procedure that mimics Reinforcement Learning and trains the model directly toward solving the SAT problem. The experimental results show the superior out-of-sample generalization performance of our framework compared to the recently developed NeuroSAT method. Recent advances in neural network models for discrete structures have given rise to a new field in Representation Learning known as the Neuro-Symbolic methods. Generally speaking, these methods aim at marrying the classical symbolic techniques in Formal Methods and Computer Science to Deep Learning in order to benefit both disciplines. One of the most exciting outcomes of this marriage is the emergence of neural models for learning how to solve classical combinatorial optimization problems in Computer Science. The key observation behind many of these models is that in practice, for a given class of combinatorial problems in a specific domain, the problem instances are typically drawn from a certain (unknown) distribution. Therefore, if a sufficient number of problem instances are available, then in principle, Statistical Learning should be able to extract the common structures among these instances and produce meta-algorithms (or models) that would, in theory, outperform the carefully hand-crafted algorithms. There have been two main approaches to realize this idea in practice. In the first group of methods, the general template of the solver algorithm (which is typically the greedy strategy) is directly imported from the classical heuristic search algorithm, and the Deep Learning component is only tasked to learn the optimal heuristics within this template. In combination with Reinforcement Learning, such a strategy has been shown to be quite effective for various NP-complete problems, e.g. BID16. Nevertheless, the resulting model is bounded by the greedy strategy, which is sub-optimal in general. The alternative is to go one step further and let Deep Learning figure out the entire solution structure from scratch. This approach is quite attractive as it allows the model to learn not only the optimal (implicit) decision heuristics but also the optimal search strategies beyond the greedy strategy. However, this comes at a price: training such models can be quite challenging! To do so, a typical candidate is Reinforcement Learning (Policy Gradient, specifically), but such techniques are usually sample inefficient, e.g. BID4. As an alternative method for training, more recently BID24 have proposed using the latent representations learned for the binary classification of the Satisfiability (SAT) problem to actually produce a neural SAT solver model. Even though using such a proxy for learning a SAT solver is an interesting observation and provides us with an end-to-end differentiable architecture, the model is not directly trained toward solving a SAT problem (unlike Reinforcement Learning). As we will see later in this paper, that can indeed result in poor generalization and sub-optimal models.
In this paper, we propose a neural Circuit-SAT solver framework that effectively belongs to the second class above; that is, it learns the entire solution structure from scratch. More importantly, to train such a model, we propose a training strategy that, unlike the typical Policy Gradient, is differentiable end-to-end, yet it trains the model directly toward the end goal (similar to Policy Gradient). Furthermore, our proposed training strategy enjoys an Explore-Exploit mechanism for better optimization even though it is not exactly a Reinforcement Learning approach. The other aspect of building neural models for solving combinatorial optimization problems is how the problem instance should be represented by the model. Using classical architectures like RNNs or LSTMs completely ignores the inherent structure present in the problem instances. For this very reason, there has recently been a strong push to employ structure-aware architectures such as different variations of neural graph embedding. Most neural graph embedding methodologies are based on the idea of synchronously propagating local information on an underlying (undirected) graph that represents the problem structure. The intuition behind using local information propagation for embedding comes from the fact that many original combinatorial optimization algorithms can actually be seen as propagating information. In our case, since we are dealing with Boolean circuits and circuits are Directed Acyclic Graphs (DAGs), we would need an embedding architecture that takes into account the special structure of DAGs (i.e. the topological order of the nodes). In particular, we note that in many DAG-structured problems (such as circuits, computational graphs, query DAGs, etc.), the information is propagated sequentially rather than synchronously, hence a justification to have sequential propagation for the embedding as well. To this end, we propose a rich embedding architecture that implements such a propagation mechanism for DAGs. As we see in this paper, our proposed architecture is capable of harnessing the structural information in the input circuits. To summarize, our contributions in this work are three-fold: (a) We propose a general, rich graph embedding architecture that implements sequential propagation for DAG-structured data. (b) We adapt our proposed architecture to design a neural Circuit-SAT solver which is capable of harnessing structural signals in the input circuits to learn a SAT solver. (c) We propose a training strategy for our architecture that is end-to-end differentiable, yet similar to Reinforcement Learning techniques, it directly trains our model toward solving the SAT problem with an Explore-Exploit mechanism. The experimental results show the superior performance of our framework especially in terms of generalizing to new problem domains compared to the baseline. Deep learning on graph-structured data has recently become a hot topic in the Machine Learning community under the general umbrella of Geometric Deep Learning. Based on the assumptions they make, these models typically divide into two main categories. In the first category, the graph-structured datapoints are assumed to share the same underlying graph structure (aka the domain) and only differ based on the feature values assigned to each node or edge. The methods in this category operate in both the spatial and the frequency domains; examples include the Graph Neural Network BID23.
In the second category, on the other hand, each example in the training data has its own domain (graph structure). Since the domain varies across datapoints, these methods mostly operate in the spatial domain and can typically be seen as the generalization of the classical CNNs (e.g. MoNet BID20), the classical RNNs (e.g. TreeLSTM BID27; BID25), or both, to the graph domain. In this paper, we extend the single-layer DAG-RNN model for DAG-structured data BID3; BID25 to the more general deep version with Gated Recurrent Units, where each layer processes the input DAG either in the forward or the backward direction. On the other hand, the application of Machine Learning (deep learning specifically) to logic and symbolic computation has recently emerged as a bridge between Machine Learning and classical Computer Science. While works such as BID11; BID1 have shown the effectiveness of (recursive) neural networks in modeling symbolic expressions, others have taken one step further and tried to learn approximate algorithms to solve symbolic NP-complete problems BID16; BID4; BID30. In particular, as opposed to black-box methods (e.g. BID4; BID30), BID16 have shown that by incorporating the underlying graph structure of an NP-hard problem, efficient search heuristics can be learned for the greedy search algorithm. Although working in the context of greedy search introduces an inductive bias that benefits the sample efficiency of the framework, the resulting algorithm is still bounded by the sub-optimality of the greedy search. More recently, Selsam et al. BID24 have introduced the NeuroSAT framework, a deep learning model aiming at learning to solve the Boolean Satisfiability problem (SAT) from scratch without biasing it toward greedy search. In particular, they have primarily approached the SAT problem as a binary classification problem and proposed a clustering-based post-processing analysis to find a SAT solution from the latent representations extracted from the learned classifier. Although they have shown the empirical merits of their proposed framework, it is not clear why the proposed post-processing clustering should find the SAT solution without being explicitly trained toward that goal. In this paper, we propose a deep learning framework for the Circuit-SAT problem (a more general form of the SAT problem), but in contrast to NeuroSAT, our model is directly trained toward finding SAT solutions without requiring it to see them in the training sample. In this section, we formally formulate the problem of learning on DAG-structured data and propose a deep learning framework to approach the problem. It should be noted that even though this framework has been developed for DAGs, the underlying dataset can be a general graph as long as an explicit ordering for the nodes of each graph is available. This ordering is naturally induced by the topological sort algorithm in DAGs or can be imposed on general undirected graphs to yield DAGs. Let G = (V_G, E_G) denote a Directed Acyclic Graph (DAG). We assume the set of nodes of G is ordered according to the topological sort of the DAG. For any node v ∈ V_G, π_G(v) represents the set of direct predecessors of v in G. Also, for a given DAG G, we define the reversed DAG G^r with the same set of nodes but reversed edges. When topologically sorted, the nodes of G^r appear in the reversed order of those of G. Furthermore, for a given G, let µ_G: V_G → R^d be a d-dimensional vector function defined on the nodes of G.
We refer to µ_G as a DAG function, i.e., a function that is defined on a DAG. Note that the notation µ_G implicitly induces the DAG structure G along with the vector function defined on the DAG. Finally, let G_d denote the space of all possible d-dimensional functions µ_G (along with their underlying graphs G). We define the parametric functional F_θ: G_d → O, which maps d-dimensional DAG functions to an output space O. The next step is to define the mathematical form of the functional F_θ. In this work, we propose the composition F_θ(µ_G) = C_α(P(E_β(µ_G))). Intuitively, E_β: G_d → G_q is the embedding function that maps the input d-dimensional DAG functions into a q-dimensional DAG function space. Note that the embedding function in general may transform both the underlying DAG size/structure as well as the DAG function defined on it. In this paper, however, we assume it only transforms the DAG function and keeps the input DAG structure intact. Once the DAG is embedded into the new space, we apply the fixed pooling function P: G_q → G_q on the embedded DAG function to produce a (possibly) aggregated version of it. For example, if we are interested in DAG-level predictions, P can be average pooling across all nodes of the input DAG to produce a singleton DAG; whereas, in the case of node-level predictions, P is simply the Identity function. In this paper, we set P to retrieve only the sink nodes in the input DAG. Finally, the classification function C_α: G_q → O is applied on the aggregated DAG function to produce the final prediction output in O. In this work, we set C_α to be a multi-layer neural network. The tuple θ = (α, β) identifies all the free parameters of the model. The (supervised) embedding of graph-based data into traditional vector spaces has recently become a hot topic in the Machine Learning community BID18; BID25; BID27. Many of these frameworks are based on the key idea of representing each node in the input graph by a latent vector called the node state and updating these latent states via an iterative (synchronous) propagation mechanism that takes the graph structure into account. Two of these methodologies that are closely related to the proposed framework in this paper are the Gated Graph Sequence Neural Networks (GGS-NN) BID18 and DAG Recurrent Neural Networks (DAG-RNN) BID25. While GGS-NNs apply multi-level Gated Recurrent Unit (GRU)-like updates in an iterative propagation scheme on general (undirected) graphs, DAG-RNNs apply simple RNN logic in a one-pass, sequential propagation mechanism from the input DAG's source nodes to its sink nodes. Our proposed framework is built upon the DAG-RNN framework BID25 but enriches it further by incorporating key ideas from GGS-NNs BID18, Deep RNNs BID22 and sequence-to-sequence learning BID26. Before we explain our framework, it is worth noting that assigning input feature/state vectors to each node is equivalent to defining a DAG function in our framework. For the sake of notational simplicity, for the input DAG function µ_G, we define the d-dimensional node feature vector x_v = µ_G(v) for every node v ∈ V_G. Given the node feature vectors x_v for an input DAG, the update rule for the state vector at each node is defined as h_v = GRU(x_v, h'_v), where GRU is the standard Gated Recurrent Unit update applied on the input vector at node v and on the aggregated state h'_v of its direct predecessors, which in turn is computed by the aggregator function A (a function mapping sets of state vectors to a single vector) as h'_v = A({h_u : u ∈ π_G(v)}). The aggregator function is defined as a tunable deep set function BID32 with free parameters that is invariant to the permutation of its inputs.
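A minimal PyTorch-style sketch of this one-layer update rule is given below. The deep-set aggregator is simplified to rho(sum(phi(.))), and the class and argument names (DAGLayer, topo_order, parents) are illustrative assumptions rather than the authors' implementation.

    import torch
    import torch.nn as nn

    class DAGLayer(nn.Module):
        # One forward embedding layer: h_v = GRU(x_v, A({h_u : u in predecessors of v}))
        def __init__(self, d_in, q):
            super().__init__()
            self.cell = nn.GRUCell(d_in, q)                       # GRU over (x_v, aggregated state)
            self.phi = nn.Sequential(nn.Linear(q, q), nn.ReLU())  # deep-set: rho(sum(phi(h_u)))
            self.rho = nn.Sequential(nn.Linear(q, q), nn.ReLU())
            self.q = q

        def forward(self, x, topo_order, parents):
            # x: (n_nodes, d_in); topo_order: node ids sorted topologically;
            # parents[v]: list of direct predecessors of node v in the DAG
            h = [None] * x.shape[0]
            for v in topo_order:                                  # sequential one-pass propagation
                if parents[v]:
                    pred = torch.stack([h[u] for u in parents[v]])
                    agg = self.rho(self.phi(pred).sum(dim=0))     # permutation-invariant aggregation
                else:
                    agg = x.new_zeros(self.q)                     # source nodes start from a zero state
                h[v] = self.cell(x[v:v + 1], agg.unsqueeze(0)).squeeze(0)
            return torch.stack(h)                                 # embedded DAG function, shape (n_nodes, q)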
The main difference between these proposed update rules and the ones in DAG-RNN is that in DAG-RNN we have the simple RNN logic instead of the GRU, and the aggregation logic is simply (fixed) summation. By applying the update logic in equation 2 sequentially on the nodes of the input DAG processed in the topological sort order, we compute the state vector h_v for all nodes of G in one pass. This would complete the one-layer (forward) embedding of the input DAG function, or E_β(µ_G) = δ_G. Note that the same way that DAG-RNNs are the generalization of RNNs on sequences to DAGs, our proposed one-layer embedding can be seen as the generalization of GRU-NNs on sequences to DAGs. Furthermore, we introduce the reversed layers (denoted by E^r) that are similar to the regular forward layers except that the input DAG is processed in the reversed order. Alternatively, reversed layers can be seen as regular layers that process the reversed version of the input DAG, G^r; that is, E^r_β(µ_G) = E_β(µ_{G^r}). The main reason we have introduced reversed layers in our framework is that in the regular forward layers, the state vector for each node is only affected by the information flowing from its ancestor nodes; whereas the information from the descendant nodes can also be highly useful for the learning task at hand. The reversed layers provide such information for the learning task. Furthermore, the introduction of reversed layers is partly motivated by the successful application of processing sequences backwards in sequence-to-sequence learning BID26. Sequences can be seen as special-case linear DAGs; as a result, reversed layers can be interpreted as the generalized version of reversing sequences. The natural extension of the one-layer embedding is the stacked L-layer version where the i-th layer has its own parameters β_i and output DAG function dimensionality q_i. Furthermore, the stacked L layers can be sequentially applied T times in a recurrent fashion to generate the final embedding: the L layers are composed into a stack E_stack, which is applied T times, each time feeding its output back as its input through a projection, where β = β_1, ..., β_L, H is the list of the parameters and Proj_H: G_{q_L} → G_d is a linear projection with the projection matrix H_{d×q_L} that simply adjusts the output dimensionality of E_stack so it can be fed back to E_stack as the input. In our experiments, we have found that by letting T > 1, we can significantly improve the accuracy of our models without introducing more trainable parameters. In practice, we fix the value of T during training and increase it during testing to achieve better accuracy. Also note that the L stacked layers in E_stack can be any permutation of regular and reversed layers. We refer to this proposed framework as Deep-Gated DAG Recursive Neural Networks, or DG-DAGRNN for short. Figure 1: (a) A toy example input DAG function µ_G; (b) a DG-DAGRNN model that processes the input in (a) using two sequential DAG embedding layers: a forward layer followed by a reverse layer. The solid red and green arrows show the flow of information within each layer while the black arrows show the feed-forward flow of information in between the layers. Also, the dotted blue arrows show the recurrent flow of information from the last embedding layer back to the first one.
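Continuing the sketch above, a stacked embedding with a forward layer followed by a reversed layer, applied recurrently T times with a projection feeding the output back as input, could look as follows. This is again an illustrative sketch; the exact composition used in the paper (e.g., where the projection is applied) may differ.

    class StackedDAGEmbedding(nn.Module):
        def __init__(self, d, q):
            super().__init__()
            self.fwd = DAGLayer(d, q)                   # processes sources -> sinks
            self.rev = DAGLayer(q, q)                   # processes the reversed DAG (sinks -> sources)
            self.proj = nn.Linear(q, d, bias=False)     # Proj_H: adjusts dimensionality for feedback

        def forward(self, x, topo, parents, rev_topo, rev_parents, T=1):
            inp = x
            for _ in range(T):                          # recurrent application of the stack
                h = self.fwd(inp, topo, parents)
                h = self.rev(h, rev_topo, rev_parents)
                inp = self.proj(h)                      # fed back as input for the next pass
            return h                                    # final q-dimensional embedded DAG function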
The problem is defined as follows: given a Boolean expression consists of Boolean variables, parentheses, and logical gates (specifically And ∧, Or ∨ and Not ¬), find an assignment to the variables such that it would satisfy the original expression, aka a solution. If the expression is not satisfiable, it will be labeled as UNSAT. Moreover, when represented in the circuit format, Boolean expressions can aggregate the repetitions of the same Boolean sub-expression in the expression into one node in the circuit. This is also crucial from the learning perspective as we do not want to learn two different representations for the same Boolean sub-expression. In this section, we apply the framework from the previous section to learn a Circuit-SAT solver merely from data. More formally, a Boolean circuit can be modeled as a DAG function µ G with each node representing either a Boolean variable or a logical gate. In particular, we have µ G: V G → R 4 defined as µ G (v) = One-Hot(type(v)), where type(v) ∈ {And, Or, Not, Variable}. All the source nodes in a circuit µ G have type Variable. Moreover, each circuit DAG has only one sink node (the root node of the Boolean expression).Similar to BID24, we could also approach the Circuit-SAT problem from two different angles: predicting the circuit satisfiability problem as a binary classification problem, and solving the Circuit-SAT problem directly by generating a solution if the input circuit is indeed SAT. In BID24, solving the former is the prerequisite for solving the latter. However, that is not the case in our proposed model and since we are interested to actually solve the SAT problems, we do not focus on the binary classification problem. Nevertheless, our model can be easily adapted for SAT classification, as illustrated in Appendix A. Learning to solve SAT problems (i.e.finding a satisfying assignment) is indeed a much harder problem than SAT/UNSAT classification. In the NeuroSAT framework, BID24, the authors have proposed a post-processing unsupervised procedure to decode a solution from the latent state representations of the Boolean literals. Although this approach works empirically for many SAT problems, it is not clear that it would also work for the Circuit-SAT problems. But more importantly, it is not clear why this approach should decode the SAT problems in the first place because the objective function used in BID24 does not explicitly contain any component for solving SAT problems; in fact, the decoding procedure is added as a secondary analysis after training. In other words, the model is not optimized toward actually finding SAT assignments. In contrast, in this paper, we pursue a completely different strategy for training a neural Circuit-SAT solver. In particular, using the DG-DAGRNN framework, we learn a neural functional F θ on the space of circuits µ G such that given an input circuit, it would directly generate a satisfying assignment for the circuit if it is indeed SAT. Moreover, we explicitly train F θ to generate SAT solutions without requiring to see any actual SAT assignment during training. Our proposed strategy for training F θ is reminiscent of Policy Gradient methods in Deep Reinforcement Learning BID2.The Solver Network. We start with characterizing the components of F θ. 
First, the embedding function E_β is set to be a multi-layer recursive embedding as in equation 3 with interleaving regular forward and reversed layers, making sure that the last layer is a reversed layer so that we can read off the final outputs of the embedding from the Variable nodes (i.e. the sink nodes of the reversed DAG). The classification function C_α is set to be an MLP with the ReLU activation function for the hidden layers and the Sigmoid activation for the output layer. The output space here encodes the soft assignment (i.e. in the range [0, 1]) to the corresponding variable node in the input circuit. We also refer to F_θ as the solver or the policy network. The Evaluator Network. Furthermore, for any given circuit µ_G, we define the soft evaluation function R_G as a DAG computational graph that shares the same topology G with the circuit µ_G except that the And nodes are replaced by the smooth min function, the Or nodes by the smooth max function and the Not nodes by the N(z) = 1 − z function, where z is the input to the Not node. The smooth min and max functions are defined as temperature-weighted averages of their inputs, where τ ≥ 0 is the temperature. For τ = +∞, both S_max and S_min are the arithmetic mean functions. As τ → 0, we have S_max → max and S_min → min. One can also show that for all a = (a_1, ..., a_n): min(a) < S_min(a) < S_max(a) < max(a). More importantly, as opposed to the min and max functions, their smooth versions are fully differentiable w.r.t. all of their inputs. As its name suggests, the soft evaluation function evaluates a soft assignment (i.e. in [0, 1]) to the variables of the circuit. In particular, at a low enough temperature, if for a given input assignment R_G yields a value strictly greater than 0.5, then that assignment (or its hard counterpart) can be seen as a satisfying solution for the circuit. We also refer to R_G as the evaluator or the reward network. Note that the evaluator network does not have any trainable parameter. Encoding logical expressions into neural networks is not new per se, as there has recently been a push to enrich deep learning with symbolic computing BID14 BID31. What is new in our framework, however, is two-fold: (a) each graph example in our dataset induces a different evaluation network, as opposed to having one fixed network for the entire dataset, and (b) by encoding the logical operators as smooth min and max functions, we provide a more efficient framework for back-propagating the gradients and speeding up the learning as a result, as we will see shortly. The Optimization. Putting the two pieces together, we define the satisfiability function S_θ(µ_G) = R_G(F_θ(µ_G)). Intuitively, the satisfiability function uses the solver network to produce an assignment for the input circuit and then feeds the resulting assignment to the evaluator network to see if it satisfies the circuit. We refer to the final output of S_θ as the satisfiability value for the input circuit, which is a real number in [0, 1]. Having computed the satisfiability value, we define the loss function as a smooth step function of s (a differentiable, decreasing step centered at s = 0.5 whose sharpness is controlled by κ), where s = S_θ(µ_G) and κ ≥ 1 is a constant. By minimizing the loss function in equation 7, we push the solver network to produce an assignment that yields a higher satisfiability value S_θ(µ_G). For satisfiable circuits this would eventually result in finding a satisfying assignment for the circuit. However, if the input circuit is UNSAT, the maximum achievable value for S_θ is 0.5, as we have shown in Appendix B.
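The sketch below illustrates the evaluator and the loss. The smooth min/max are written as temperature-weighted (Boltzmann) averages, which satisfy the properties stated above (arithmetic mean at τ = ∞, exact min/max as τ → 0, values strictly between min and max); the smooth-step loss shown is one candidate consistent with the described behavior (decreasing, centered at 0.5, with its largest gradients near 0.5 for κ > 1). Both formulas are stand-ins for the paper's exact equations, and all names are illustrative.

    import torch

    def smooth_max(a, tau):
        w = torch.softmax(a / tau, dim=0)     # -> max as tau -> 0, -> arithmetic mean as tau -> inf
        return (w * a).sum()

    def smooth_min(a, tau):
        w = torch.softmax(-a / tau, dim=0)
        return (w * a).sum()

    def soft_evaluate(assignment, topo_order, node_type, parents, tau):
        # R_G: And -> smooth min, Or -> smooth max, Not -> 1 - z; `assignment` maps each
        # Variable node to a soft value in [0, 1] (a scalar tensor); the root is assumed
        # to be the last node in topo_order.
        val = {}
        for v in topo_order:
            if node_type[v] == "Variable":
                val[v] = assignment[v]
            else:
                ins = torch.stack([val[u] for u in parents[v]])
                if node_type[v] == "And":
                    val[v] = smooth_min(ins, tau)
                elif node_type[v] == "Or":
                    val[v] = smooth_max(ins, tau)
                else:                          # a "Not" gate has a single input
                    val[v] = 1.0 - ins[0]
        return val[topo_order[-1]]             # satisfiability value s in [0, 1]

    def smooth_step_loss(s, kappa=10.0):
        return (1.0 - s) ** kappa / ((1.0 - s) ** kappa + s ** kappa)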
In practice though, the inclusion of UNSAT circuits in the training data slows down the training process, mainly because the UNSAT circuits keep confusing the solver network as it tries hard to find a SAT solution for them. For this very reason, in this scheme, we only train on SAT examples and exclude the UNSAT circuits from training. Nevertheless, if the model has enough capacity, one can still include the UNSAT examples and pursue the training as a pure unsupervised learning task, since the true SAT/UNSAT labels are not used anywhere in equation 7. Moreover, the loss function in equation 7 has the nice property of having higher gradients for satisfiability values close to 0.5 when κ > 1 (we set κ = 10 in our experiments). This means that the gradient vector in backpropagation is always dominated by the examples closer to the decision boundary. In practice, that means that right at the beginning the training algorithm pushes the easier examples in the training set (with satisfiability values close to 0.5) to the SAT region (> 0.5) with a safety margin from 0.5. As the training progresses, harder examples (with satisfiability values close to 0) start moving toward the SAT region. As mentioned before, the proposed learning scheme in this section can be seen as a variant of Policy Gradient methods, where the solver network represents the policy function and the evaluator network acts as the reward function. The main difference here is that in our problem the mathematical form of the reward function is fully known and differentiable, so the entire pipeline can be trained using backpropagation in an end-to-end fashion to maximize the total reward over the training sample. Exploration vs. Exploitation. The reason we use the smooth min and max functions in the evaluator network instead of the actual min and max is that in a min-max circuit, the gradient vector of the output of the circuit w.r.t. its inputs has at most one non-zero entry. That is, the circuit output is sensitive to only one of its inputs in the case of infinitesimal changes. For fixed input values, we refer to this input as the active input and to the path from the active input to the output as the active path. In the case of a min-max evaluator, the gradients flow back only through the active path of the evaluator network, forcing the solver network to change such that it can satisfy the input circuit through its active path only. This strategy, however, is quite myopic and, as we observed empirically, leads to slow training and sub-optimal solutions. To avoid this effect, we use the smooth min and max functions in the evaluator network to allow the gradients to flow through all paths in the input circuit. Furthermore, in the beginning of the training we start with a high temperature value to let the model explore all paths in the input circuits for finding a SAT solution. As the training progresses, we slowly anneal the temperature toward 0 so that the model exploits the more active path(s) for finding a solution. One annealing strategy is to let τ = t^(−ε), where t is the timestep and ε is the annealing rate; in our experiments we set ε = 0.4. It should be noted that at test time, the smooth min and max functions are replaced by their non-smooth versions. Prediction. Given a test circuit µ_G, we evaluate s = S_θ(µ_G) = R_G(F_θ(µ_G)). If s > 0.5, then the circuit is classified as SAT and the SAT solution is provided by F_θ(µ_G). Otherwise, the circuit is classified as UNSAT (a short sketch of this prediction step is given below).
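A short sketch of the annealing schedule and the prediction rule just described (names are illustrative; `solver` plays the role of F_θ and `hard_evaluate` stands for R_G with the exact, non-smooth min/max used at test time):

    def temperature(t, eps=0.4):
        # tau = t ** (-eps): relatively high early in training (explore all paths),
        # annealed toward 0 as training progresses (exploit the active paths)
        return float(t) ** (-eps)

    def predict(circuit, solver, hard_evaluate):
        assignment = solver(circuit)                   # soft assignment in [0, 1] per variable
        s = hard_evaluate(circuit, assignment)
        if s > 0.5:                                    # SAT is declared only when a solution is found
            return "SAT", {v: int(a > 0.5) for v, a in assignment.items()}
        return "UNSAT", None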
This way, unlike SAT classification, we predict SAT for a given circuit only if we have already found a SAT solution for it. In other words, our model never produces false positives. We have formally proved this in Appendix B. Moreover, at prediction time, we do not need to set the number of recurrences T in equation 3 to the same value we used for training. In fact, we have observed that by letting T be variable on a per-example basis, we can improve the model accuracy quite significantly at test time. The baseline method we have compared our framework to is the NeuroSAT model by BID24. Like most classical SAT solvers, NeuroSAT assumes the input problem is given in the Conjunctive Normal Form (CNF). Even though that is a fair assumption in general, in some cases the input does not naturally come as CNF. For instance, in hardware verification, the input problems are often in the form of circuits. One can indeed convert the circuit format into CNF in polynomial time using the Tseitin transformation. However, such a transformation will introduce extra variables (i.e. the derived variables) which may further complicate the problem for the SAT solver. More importantly, as a number of works in the SAT community have shown, such transformations typically lose the structural information embedded in the circuit format, which otherwise can be a rich source of information for the SAT solver BID28; BID12. As a result, there has been quite an effort in the classical SAT community to develop SAT solvers that directly work with the circuit format BID28; BID15. In a similar vein, our neural framework for learning a SAT solver enables us to harness such structural signals in learning by directly consuming the circuit format. That contrasts with the NeuroSAT approach, which cannot in principle benefit from such structural information. Despite this clear advantage of our framework over NeuroSAT, in this paper we assume the (raw) input problems come in CNF, just so we can make a fair comparison to NeuroSAT. Instead, for our method, we propose to use pre-processing methods to convert the input CNF into a circuit, which has the potential of injecting structural information into the circuit structure. In particular, if available, one can in principle encode problem-specific heuristics into the structure while building the circuit. For example, if there is a variable ordering heuristic available for a specific class of SAT problems, it can be used to build the target circuit in a certain way, as discussed in Appendix C. Note that we could just consume the original CNF; after all, CNF is a (flat) circuit, too. But as we empirically observed, that would negatively affect the results, which again highlights the fact that our proposed framework has been optimized to utilize circuit structure as much as possible. Both our method and NeuroSAT require a large training sample size for moderate-size problems. The good news is that both methods can effectively be trained on an infinite stream of randomly generated problems in real-world applications. However, since we ran our experiments only on one GPU with limited memory, we had to limit the training sample size for the purpose of experimentation. This in turn restricts the maximum problem sizes we could train both models on. Nevertheless, our method can generalize pretty well to out-of-sample SAT problems with much larger sizes, as shown below.
We have used the generation process proposed in the NeuroSAT paper BID24 to generate random k-SAT CNF pairs (with k stochastically set according to the default settings in BID24). These pairs are then directly fed to NeuroSAT for training. For our method, on the other hand, we first need to convert these CNFs into circuits 2. In Appendix C, we have described the details of this conversion process. Experimental Setup: We have trained a DG-DAGRNN model (i.e. our framework) and a NeuroSAT model on a dataset of 300K SAT and UNSAT pairs generated according to the scheme proposed in BID24. The number of Boolean variables in the problems in this dataset ranges from 3 to 10. We have designed both models to have roughly ∼ 180K tunable parameters. In particular our model has two DAG embedding layers: a forward layer followed by a reversed layer, each with the embedding dimension q = 100. The classifier is a 2-layer MLP with hidden dimensionality 30. The aggregator function A(·) consists of two 2-layer MLPs, each with hidden dimensionality 50. For training, we have used the Adam optimization algorithm with learning rate of 10 −5, weight decay of 10 −10 and gradient clipping norm of 0.65. We have also applied a dropout mechanism for the aggregator function during training with the rate of 20%. For the NeuroSAT model, we have used the default hyper-parameter settings proposed in BID24. Finally, since our model does not produce false positives, we have only included satisfiable examples in the test data for all experiments. In-Sample Results: Once we trained the two models, the main performance metric we measure is the percentage of SAT problems in the test set that each model can actually find a SAT solution for.3 FIG1 (Left) shows this metric on a test set from the same distribution for both our model and NeuroSAT as the number of recurrences (or propagation iterations for NeuroSAT) T increases. Not surprisingly, both methods are able to decode more SAT problems as we increase T. However, our method converges much faster than NeuroSAT (to a slightly smaller value). In other words, compared to NeuroSAT, our method requires smaller of number of iterations at the test time to decode SAT problems. We conjecture this is due to the fact the sequential propagation mechanism in DG-DAGRNN is more effective in decoding the structural information in the circuit format for the SAT problem than the synchronous propagation mechanism in NeuroSAT for the flat CNF. Out-of-Sample Results: Furthermore, we evaluated both trained models on test datasets drawn from different distributions than the training data with much larger number of variables (20, 40, 60 and 80 variables, in particular). We let both models iteratively run on each test dataset until the test metric converges. FIG1 (Right) shows the test metric for both methods on these datasets after convergence. As the demonstrate, compared to that of our method, the performance of NeuroSAT declines faster as we increase the number variables during test time, with a significant margin. In other words, our method generalizes better to out-of-sample, larger problems during the test time. We attribute this to the fact that NeuroSAT is trained toward the SAT classification problem as a proxy to learn a solver. This may in the classifier picking up certain features that are informative for classification of in-sample examples which are, otherwise, harmful (or useless at best) for learning a solver for out-of-sample examples. 
Our framework, on the other hand, simply does not suffer from such problem because it is directly trained toward solving the SAT problem. Time Complexity: We trained both our model and NeuroSAT for a day on a single GPU. To give an idea of the test time complexity, it took both our model and NeuroSAT roughly about 3s to run for 40 iterations on a single example of 20 variables. We also measured the time that it takes for a modern SAT Solver (MiniSAT here) to solve a similar example to be roughly about 0.7s in average. Despite this difference, our neural approach is way more prallelizable compared to modern solvers such that many examples can be solved concurrently in a single batch on GPU. For example, while it took MiniSAT 114min to solve a set of 10, 000 examples, it took our method only 8min to solve for the same set in a batch-processing fashion on GPU. This indeed shows another important advantage of our neural approach toward SAT solving in large-scale applications: the extreme parallelization. To further evaluate the generalization performance of the trained models from the previous section, we have tested them on SAT problems coming from an entire different domain than k-SAT problems. In particular, we have chosen the graph k-coloring decision problem which belongs to the class of NP-complete problems. In short, given an undirected graph G with k color values, in graph k-coloring decision problem, we seek to find a mapping from the graph nodes to the color set such that no adjacent nodes in the graph have the same color. This classical problem is reducible to SAT. Moreover, the graph topology in general contains valuable information that can be further injected into the circuit structure when preparing circuit representation for our model. Appendix D illustrates how we incorporate this information to convert instances of the graph k-coloring problem into circuits. For this experiment, we have generated two different test datasets:Dataset-1: We have generated a diverse set of random graphs with number of nodes ranging between 6 and 10 and the edge percentage of 37%. The random graphs are evenly generated according to six distinct distributions: Erdos-Renyi, Barabasi-Albert, Power Law, Random Regular, Watts-Strogatz and Newman-Watts-Strogatz. Each generated graph is then paired with a random color number in 2 ≤ k ≤ 4 to generate a graph k-coloring instance. We only keep the SAT instances in the dataset. We first generate random trees with the same number of nodes as Dataset-1. Then each tree is paired with a random color number in 2 ≤ k ≤ 4. Since the chromatic number of trees is 2, every single pair so far is SAT. Lastly, for each pair we keep adding random edges to the graph until it becomes UNSAT, then we remove the last added edge to make the instance SAT again and stop. Even though Dataset-1 has much higher coverage in terms of different graph distributions, Dataset-2 contains harder SAT examples in general, simply because in average, it contains maximally constrained instances that are still SAT. We evaluated both our method and NeuroSAT (which were both trained on k-SAT-3-10) on these test datasets. Our method could solve 48% and 27% of the SAT problems in Dataset-1 and Dataset-2, respectively. However, to our surprise, the same NeuroSAT model that generated the out-of-sample on k-SAT datasets in FIG1, could not solve any of the SAT graph k-coloring problems in Dataset-1 and Dataset-2, even after 128 propagation iterations. 
This does not match the reported in BID24 on graph coloring. We suspect different CNF formulations for the graph k-coloring problem might be the cause behind this discrepancy, which would mean that NeuroSAT is quite sensitive to the change of problem distribution. Nevertheless, the final judgment remains open up to further investigations. In a separate effort, we tried to actually train a fresh NeuroSAT model on a larger versions of Dataset-1 and Dataset-2 which also included UNSAT examples. However, despite a significant decrease on the classification training loss, NeuroSAT failed to decode any of the SAT problems in the test sets. We attribute this behavior to the fact that NeuroSAT is dependent on learning a good SAT classifier that can capture the conceptual essence of SAT vs. UNSAT. As a , in order to avoid learning superficial classification features, NeuroSAT restricts its training to a strict regime of SAT-UNSAT pairs, where the two examples in a pair only differ in negation of one literal. However, such strict regime can be only enforced in the random k-SAT problems. For graph coloring, the closest strategy we could come up with was the one in Dataset-2, where the SAT-UNSAT examples in a pair only differ in an edge (which still translates to a couple of clauses in the CNF). This again signifies the importance of learning the solver directly rather than relying on a classification proxy. In this paper, we proposed a neural framework for efficiently learning a Circuit-SAT solver. Our methodology relies on two fundamental contributions: a rich DAG-embedding architecture that implements the sequential propagation mechanism on DAG-structured data and is capable of learning useful representations for the input circuits, and an efficient training procedure that trains the DAGembedding architecture directly toward solving the SAT problem without requiring SAT/UNSAT labels in general. Our proposed training strategy is fully differentiable end-to-end and at the same time enjoys many features of Reinforcement Learning such as an Explore-Exploit mechanism and direct training toward the end goal. As our experiments showed, the proposed embedding architecture is able to harness structural information in the input DAG distribution and as a solve the test SAT cases in a fewer number of iterations compared to the baseline. This would also allow us to inject domain-specific heuristics into the circuit structure of the input data to obtain better models for that specific domain. Moreover, our direct training procedure as opposed to the indirect, classification-based method in NeuroSAT enables our model to generalize better to out-of-sample test cases, as demonstrated by the experiments. This superior generalization got even more expressed as we transferred the trained models to a complete new domain (i.e. graph coloring). Furthermore, we argued that not only does direct training give us superior out-of-sample generalization, but it is also essential for the problem domains where we cannot enforce the strict training regime where SAT and UNSAT cases come in pairs with almost identical structures, as proposed by BID24.Future efforts in this direction would include closely examining the SAT solver algorithm learned by our framework to see if any high-level knowledge and insight can be extracted to further aide the classical SAT solvers. 
Needless to say, this type of neural models have a long way to go in order to compete with industrial SAT solvers; nevertheless, these preliminary are promising enough to motivate the community to pursue this direction. We would like to thank Leonardo de Moura and Nikolaj Bjorner from Microsoft Research for the valuable feedback and discussions. In the classification problem, we are interested to merely classify each input circuit as SAT or UNSAT.To do so, we customize DG-DAGRNN framework as follows. The classification function C α is set to be a MLP with ReLU activation function for the hidden layers and the Sigmoid activation for the output layer. As the the output space O will become. The embedding function E β is set to be a multi-layer recursive embedding as in equation 3 with interleaving regular forward and reversed layers. For the classification problem, we make sure the last layer of the embedding is a forward layer so that we can read off from only one sink node (i.e. the expression root node) and feed the to the classification function for the final prediction. Finally given a labeled training set, we minimize the standard cross-entropy loss via the end-to-end backpropagation through the entire network.(Only-If) Proof by contradiction: let us assume that there exists a soft assignment a ∈ n such that R G (a) > 0.5, then using Lemma 1 and the definition of H(·), we will have:µ G H(a) = H R G (a) = I R G (a) > 0.5 = 1In other words, we have found a hard assignment H(a) that satisfies the circuit µ G; this is a contradiction. APPENDIX C: CONVERTING CNF TO CIRCUIT There are many ways one can convert a CNF to a circuit; some are optimized toward extracting structural information -e.g. BID12. Here, we have taken a more intuitive and general approach based on the Cube and Conquer paradigm (Heule et al. FORMULA1) for solving CNF-SAT problems. In the Cube and Conquer paradigm, for a given input Boolean formula F, a variable x in F is picked and set to TRUE once to obtain F x contains x, we effectively reduce the complexity of the original SAT problem by removing one variable. This process can be repeated recursively (up to a fixed level) for F + x and F − x by picking a new variable to reduce the complexity even further. Now inspired by this paradigm, one can easily show that the following logical equivalence holds for any variable x in F: DISPLAYFORM0 And this is exactly the principle we used to convert a CNF formula F into a circuit. In particular, by applying the equivalence in equation 8 recursively, up to a fixed level 4, we perform the CNF to circuit conversion (Note that F We know from Computer Science theory that the graph k-coloring problem can be reduced to the SAT problem by representing the problem as a Boolean CNF. There are many ways in the literature to do so; we have picked the Muldirect approach from . In particular, for a graph with N nodes and maximum k allowed colors, we define the Boolean variables x ij for 1 ≤ i ≤ N and 1 ≤ j ≤ k, where x ij = 1 indicates that the ith node is colored by the jth color. Then, the CNF encoding the decision graph k-coloring problem is defined as: DISPLAYFORM0 where E is the set of the graph edges. The left set of clauses in equation 9 ensure that each node of the graph takes at least one color. The right set of clauses in equation 9 enforce the constraint that the neighboring nodes cannot take the same color. 
As a , any satisfiable solution to the CNF in equation 9 corresponds to at least one coloring solution for the original problem if not more. Note that in this formulation, we do not require each node to take only one color value; therefore, one SAT solution can produce multiple valid graph coloring solutions. To generate a circuit from the above CNF, we note that the graph structure in graph coloring problem contains valuable structural information that can be potentially encoded as heuristics into the circuit structure. One such good heuristics, in particular, is the node degrees. More specifically, the most constrained variable first heuristic in Constraint Satisfaction Problems (CSPs) recommends assigning values to the most constrained variable first. In graph coloring problem, the higher the node degree, the more constrained the variables associated with that node are. Therefore, sorting the graph nodes based on their degrees would give us a meaningful variable ordering, which can be further used to build the circuit using the equivalence in equation 8, for example. | We propose a neural framework that can learn to solve the Circuit Satisfiability problem from (unlabeled) circuit instances. | 696 | scitldr |
Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor exploration efficiency. A variety of other algorithms such as RAML, SPG, and data noising, have also been developed in different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency. Besides, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning. Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm. Sequence generation is a ubiquitous problem in many applications, such as machine translation BID28, text summarization BID13 BID25, image captioning BID15, and so forth. Great advances in these tasks have been made by the development of sequence models such as recurrent neural networks (RNNs) with different cells BID12 BID6 and attention mechanisms BID1 BID19. These models can be trained with a variety of learning algorithms. The standard training algorithm is based on maximum-likelihood estimation (MLE) which seeks to maximize the log-likelihood of ground-truth sequences. Despite the computational simplicity and efficiency, MLE training suffers from the exposure bias BID24. That is, the model is trained to predict the next token given the previous ground-truth tokens; while at test time, since the ing model does not have access to the ground truth, tokens generated by the model itself are instead used to make the next prediction. This discrepancy between training and test leads to the issue that mistakes in prediction can quickly accumulate. Recent efforts have been made to alleviate the issue, many of which resort to the reinforcement learning (RL) techniques BID24 BID2 BID8. For example, BID24 adopt policy gradient BID29 that avoids the training/test discrepancy by using the same decoding strategy. However, RL-based approaches for sequence generation can face challenges of prohibitively poor sample efficiency and high variance. For more practical training, a diverse set of methods has been developed that are in a middle ground between the two paradigms of MLE and RL. For example, RAML adds reward-aware perturbation to the MLE data examples; SPG BID8 leverages reward distribution for effective sampling of policy gradient. Other approaches such as data noising BID34 ) also show improved . In this paper, we establish a unified perspective of the broad set of learning algorithms. Specifically, we present a generalized entropy regularized policy optimization framework, and show that the apparently diverse algorithms, such as MLE, RAML, SPG, and data noising, can all be re-formulated as special instances of the framework, with the only difference being the choice of reward and the values of a couple of hyperparameters (FIG0). 
In particular, we show MLE is equivalent to using a delta-function reward that assigns 1 to samples that exactly match data examples while −∞ to any other samples. Such extremely restricted reward has literally disabled any exploration of the model beyond training data, yielding the exposure bias. Other algorithms essentially use rewards that are more smooth, and also leverage model distribution for exploration, which generally in a larger effective exploration space, more difficult training, and better test-time performance. Besides the new understandings of the existing algorithms, the unified perspective also facilitates to develop new algorithms for improved learning. We present an example new algorithm that, as training proceeds, gradually expands the exploration space by annealing the reward and hyperparameter values. The annealing in effect dynamically interpolates among the existing algorithms. Experiments on machine translation and text summarization show the interpolation algorithm achieves significant improvement over the various existing methods. Sequence generation models are usually trained to maximize the log-likelihood of data by feeding the ground-truth tokens during decoding. Reinforcement learning (RL) addresses the discrepancy between training and test by also using models' own predictions at training time. Various RL approaches have been applied for sequence generation, such as policy gradient BID24 and actor-critic BID2. Softmax policy gradient (SPG) BID8 additionally incorporates the reward distribution to generate high-quality sequence samples. The algorithm is derived by applying a log-softmax trick to adapt the standard policy gradient objective. Reward augmented maximum likelihood (RAML) is an algorithm in between MLE and policy gradient. It is originally developed to go beyond the maximum likelihood criteria and incorporate task metric (such as BLEU for machine translation) to guide the model learning. Mathematically, RAML shows that MLE and maximum-entropy policy gradient are respectively minimizing KL divergences in opposite directions. We reformulate both SPG and RAML in a new perspective, and show they are precisely instances of a general entropy regularized policy optimization framework. The new framework provides a more principled formulation for both algorithms. Besides the algorithms discussed in the paper, there are other learning methods for sequence models. For example, Hal BID11 BID16; BID32 use a learningto-search paradigm for sequence generation or structured prediction. Scheduled Sampling adapts MLE by randomly replacing ground-truth tokens with model predictions as the input for decoding the next-step token. Our empirical comparison shows improved performance of the proposed algorithm. Policy optimization for reinforcement learning is studied extensively in robotics and game environment. For example, BID23 introduce a relative entropy regularization to reduce information loss during learning. BID26 develop a trust-region approach for monotonic improvement. BID7; BID17; BID0 study the policy optimization algorithms in a probabilistic inference perspective. The entropy-regularized policy optimization formulation presented here can be seen as a generalization of many of the previous policy optimization methods, as shown in the next section. Besides, we formulate the framework in the sequence generation context. 
We first present a generalized formulation of an entropy regularized policy optimization framework, to which a broad set of learning algorithms for sequence generation are connected. In particular, we show the conventional maximum likelihood learning is a special case of the policy optimization formulation. This provides new understandings of the exposure bias problem as well as the exploration efficiency of the algorithms. We further show that the framework subsumes as special cases other well-known learning methods that were originally developed in diverse perspectives. We thus establish a unified, principled view of the broad class of works. Let us first set up the notations for the sequence generation setting. Let x be the input and y = (y 1, . . ., y T) the sequence of T tokens in the target space. For example, in machine translation, x is the sentence in source language and y is in target language. Let (x, y *) be a training example drawn from the empirical data distribution, where y * is the ground truth sequence. We aim to learn a sequence generation model p θ (y|x) = t p θ (y t |y 1:t−1, x) parameterized with θ. The model can, for example, be a recurrent network. It is worth noting that though we present in the sequence generation context, the formulations can straightforwardly be extended to other settings such as robotics and game environment. Policy optimization is a family of reinforcement learning (RL) algorithms that seeks to learn the parameter θ of the model p θ (a.k.a policy). Given a reward function R(y|y *) ∈ R (e.g., BLEU score in machine translation) that evaluates the quality of generation y against the true y *, the general goal of policy optimization is to maximize the expected reward. A rich research line of entropy regularized policy optimization (ERPO) stabilizes the learning by augmenting the objective with information theoretic regularizers. Here we present a generalized formulation of ERPO. Assuming a general distribution q(y|x) (more details below), the objective we adopt is written as DISPLAYFORM0 where KL(· ·) is the Kullback-Leibler divergence forcing q to stay close to p θ; H(·) is the Shannon entropy imposing maximum entropy assumption on q; and α and β are balancing weights of the respective terms. In the RL literature, the distribution q has taken various forms, leading to different policy optimization algorithms. For example, setting q to a non-parametric policy and β = 0 in the prominent relative entropy policy search BID23 algorithm. Assuming q as a parametric distribution and α = 0 leads to the commonly-used maximum entropy policy gradient BID36 BID10. Letting q be a variational distribution and β = 0 corresponds to the probabilistic inference formulation of policy gradient BID0 BID17. Related objectives have also been used in other popular RL algorithms BID26 BID30.We assume a non-parametric q. The above objective can be maximized with an EM-style procedure that iterates two coordinate ascent steps optimizing q and θ, respectively. At iteration n: DISPLAYFORM1 The E-step is obtained with simple Lagrange multipliers. Note that q has a closed-form solution in the E-step. We can have an intuitive interpretation of its form. First, it is clear to see that if α → ∞, we have q n+1 = p n θ. This is also reflected in the objective Eq. where the weight α encourages q to be close to p θ. Second, the weight β serves as the temperature of the q softmax distribution. 
In particular, a large temperature β → ∞ makes q a uniform distribution, which is consistent to the outcome of an infinitely large maximum entropy regularization in Eq.. In terms of the M-step, the update rule can be interpreted as maximizing the log-likelihood of samples from the distribution q. In the context of sequence generation, it is sometimes more convenient to express the equations at token level, as shown shortly. To this end, we decompose R(y|y *) along the time steps: DISPLAYFORM2 where ∆R(y t |y *, y 1:t−1) measures the reward contributed by token y t. The solution of q in Eq. can then be re-written as: DISPLAYFORM3 The above ERPO framework has three key hyperparameters, namely (R, α, β). In the following, we show that different values of the three hyperparameters correspond to different learning algorithms (FIG0). We first connect MLE to the above general formulation, and compare and discuss the properties of MLE and regular ERPO from the new perspective. Maximum likelihood estimation is the most widely-used approach to learn a sequence generation model due to its simplicity and efficiency. It aims to find the optimal parameter value that maximizes the data log-likelihood: DISPLAYFORM0 As discussed in section 1, MLE suffers from the exposure bias problem as the model is only exposed to the training data, rather than its own predictions, by using the ground-truth subsequence y * 1:t−1 to evaluate the probability of y * t. We show that the MLE objective can be recovered from Eq. with specific reward and weight configurations. Consider a δ-reward defined as 1: DISPLAYFORM1 Let (R = R δ, α → 0, β = 1). From the E-step of Eq., we have q(y|x) = 1 if y = y * and 0 otherwise. The M-step is therefore equivalent to arg max θ log p θ (y * |x), which recovers precisely the MLE objective in Eq..That is, MLE can be seen as an instance of the policy optimization algorithm with the δ-reward and the above weight values. Any sample y that fails to match precisely the data y * will receive a negative infinite reward and never contribute to model learning. The ERPO reformulation of MLE provides a new statistical explanation of the exposure bias problem. Specifically, a very small α value makes the model distribution ignored during sampling from q, while the δ-reward permits only samples that match training examples. The two factors in effect make void any exploration beyond the small set of training data FIG1 ), leading to a brittle model that performs poorly at test time due to the extremely restricted exploration. On the other hand, however, a key advantage of the δ-reward specification is that its regular reward shape allows extreme pruning of the huge sample space, ing in a space that includes exactly the training examples. This makes the MLE implementation very simple and the computation very efficient in practice. On the contrary, common rewards (e.g., BLEU) used in policy optimization are more smooth than the δ-reward, and permit exploration in a broader space. However, such rewards usually do not have a regular shape as the δ-reward, and thus are not amenable to sample space pruning. Generally, a larger exploration space would lead to a harder training problem. Also, when it comes to the huge sample space, the rewards are still very sparse (e.g., most sequences have BLEU=0 against a reference sequence). 
Such reward sparsity can make exploration inefficient and even impractical.1 For token-level, define R δ (y1:t|y *) = t/T * if y1:t = y * 1:t and −∞ otherwise, where T * is the length of y *. Note that the R δ value of y = y * can also be set to any constant larger than −∞. Given the opposite algorithm behaviors in terms of exploration and computation efficiency, it is a natural idea to seek a middle ground between the two extremes to combine the advantages of both. A broad set of such approaches have been recently developed. We re-visit some of the popular ones, and show that these apparently divergent approaches can all be reformulated within our ERPO framework (Eqs.1-4) with varying reward and weight specifications. RAML was originally proposed to incorporate task metric reward into the MLE training, and has shown superior performance to the vanilla MLE. Specifically, it introduces an exponentiated reward distribution e(y|y *) ∝ exp{R(y|y *)} where R, as in vanilla policy optimization, is a task metric such as BLEU. RAML maximizes the following objective: DISPLAYFORM0 That is, unlike MLE that directly maximizes the data log-likelihood, RAML first perturbs the data proportionally to the reward distribution e, and maximizes the log-likelihood of the ing samples. The RAML objective reduces to the vanilla MLE objective if we replace the task reward R in e(y|y *) with the MLE δ-reward (Eq.6). The relation between MLE and RAML still holds within our new formulation (Eqs.1-2). In particular, similar to how we recovered MLE from Eq., let (α → 0, β = 1) 2, but set R to the task metric reward, then the M-step of Eq. is precisely equivalent to maximizing the above RAML objective. Formulating within the same framework allows us to have an immediate comparison between RAML and others. In particular, compared to MLE, the use of smooth task metric reward R instead of R δ permits a larger effective exploration space surrounding the training data FIG1 ), which helps to alleviate the exposure bias problem. On the other hand, α → 0 as in MLE still limits the exploration as it ignores the model distribution. Thus, RAML takes a step from MLE toward regular RL, and has effective exploration space size and exploration efficiency in between. SPG BID8 was developed in the perspective of adapting the vanilla policy gradient BID29 to use reward for sampling. SPG has the following objective: DISPLAYFORM0 where R is a common reward as above. As a variant of the standard policy gradient algorithm, SPG aims to address the exposure bias problem and shows promising BID8.We show SPG can readily fit into our ERPO framework. Specifically, taking gradient of Eq. w.r.t θ, we immediately get the same update rule as in Eq. FORMULA1 with (α = 1, β = 0, R = common reward).Note that the only difference between the SPG and RAML configuration is that now α = 1. SPG thus moves a step further than RAML by leveraging both the reward and the model distribution for full exploration FIG1 ). Sufficient exploration at training time would in theory boost the test-time performance. However, with the increased learning difficulty, additional sophisticated optimization and approximation techniques have to be used BID8 to make the training practical. Adding noise to training data is a widely adopted technique for regularizing models. Previous work BID34 has proposed several data noising strategies in the sequence generation context. 
For example, a unigram noising, with probability γ, replaces each token in data y * with a sample from the unigram frequency distribution. The ing noisy data is then used in MLE training. Though previous literature has commonly seen such techniques as a data pre-processing step that differs from the above learning algorithms, we show the ERPO framework can also subsume data noising as a special instance. Specifically, starting from the ERPO reformulation of MLE which takes (R = R δ, α → 0, β = 1) (section 3.2), data noising can be formulated as using a locally relaxed variant of R δ. For example, assume y has the same length with y * and let ∆ y,y * be the set of tokens in y that differ from the corresponding tokens in y *, then a simple data noising strategy that randomly replaces a single token y * t with another uniformly picked token is equivalent to using a reward R δ (y|y *) that takes 1 when |∆ y,y * | = 1 and −∞ otherwise. Likewise, the above unigram noising BID34 ) is equivalent to using a reward DISPLAYFORM0 where u(·) is the unigram frequency distribution. With a relaxed (i.e., smoothed) reward, data noising expands the exploration space of vanilla MLE locally FIG1 ). The effect is essentially the same as the RAML algorithm (section 3.3), except that RAML expands the exploration space based on the task metric reward. made an early attempt to address the exposure bias problem by exploiting the classic policy gradient algorithm BID29 and mixing it with MLE training. We show in the supplementary materials that the algorithm is closely related to the ERPO framework, and can be recovered with moderate approximations. Section 2 discusses more relevant algorithms for sequence generation learning. We have presented the generalized ERPO framework, and connected a series of well-used learning algorithms by showing that they are all instances of the framework with certain specifications of the three hyperparameters (R, α, β). Each of the algorithms can be seen as a point in the hyperparameter space FIG0 ). Generally, a point with a more restricted reward function R and a very small α tends to have a smaller effective exploration space and allow efficient learning (e.g., MLE), while in contrast, a point with smooth R and a larger α would lead to a more difficult learning problem, but permit more sufficient exploration and better test-time performance (e.g., (softmax) policy gradient).The unified perspective provides new understandings of the existing algorithms, and also facilitates to develop new algorithms for further improvement. Here we present an example algorithm that interpolates the existing ones. The interpolation algorithm exploits the natural idea of starting learning from the most restricted yet easiest problem configuration, and gradually expands the exploration space to reduce the discrepancy from the test time. The easy-to-hard learning paradigm resembles the curriculum learning BID4 ). As we have mapped the algorithms to points in the hyperparameter space, interpolation becomes very straightforward, which requires only annealing of the hyperparameter values. Specifically, in the general update rules Eq., we would like to anneal from using R δ to using smooth common reward, and anneal from exploring by only R to exploring by both R and p θ. Let R comm denote a common reward (e.g., BLEU). The interpolated reward can be written in the form R = λR comm + (1 − λ)R δ, for λ ∈. Plugging R into q in Eq. 
FORMULA1 and re-organizing the scalar weights, we obtain the numerator of q in the form: c · (λ 1 log p θ + λ 2 R comm + λ 3 R δ), whereModel BLEU MLE 26.44 ± 0.18 RAML 27.22 ± 0.14 SPG BID8 26.62 ± 0.05 MIXER BID24 26.53 ± 0.11 Scheduled Sampling 26.76 ± 0.17Ours 27.82 ± 0.11 Table 1: Results of machine translation.(λ 1, λ 2, λ 3) is defined as a distribution (i.e., λ 1 +λ 2 +λ 3 = 1), and, along with c ∈ R, are determined by (α, β, λ). For example, λ 1 = α/(α + 1). We gradually increase λ 1 and λ 2 and decrease λ 3 as the training proceeds. Further, noting that R δ is a Delta function (Eq.6) which would make the above direct function interpolation problematic, we borrow the idea from the Bayesian spike-and-slab factor selection method BID14. That is, we introduce a categorical random variable z ∈ {1, 2, 3} that follows the distribution (λ 1, λ 2, λ 3), and augment q as q(y|x, z) ∝ exp{c · (1(z = 1) log p θ + 1(z = 2)R comm + 1(z = 3)R δ )}. The M-step is then to maximize the objective with z marginalized out: DISPLAYFORM0 The spike-and-slab adaption essentially transforms the product of experts in q to a mixture, which resembles the bang-bang rewarded SPG method BID8 where the name bang-bang refers to a system that switches abruptly between extreme states (i.e., the z values). Finally, similar to BID8, we adopt the token-level formulation (Eq.4) and associate each token with a separate variable z. We provide the pseudo-code of the interpolation algorithm in the supplements. It is notable that Ranzato et al. FORMULA0 also develop an annealing strategy that mixes MLE and policy gradient training. As discussed in section 3 and the supplements, the algorithm can be seen as a special instance of the ERPO framework (with moderate approximation) we have presented. Next section shows improved performance of the proposed, more general algorithm compared to BID24. We evaluate the above interpolation algorithm in the tasks of machine translation and text summarization. The proposed algorithm consistently improves over a variety of previous methods. Code will be released upon acceptance. Setup In both tasks, we follow previous work BID24 and use an attentional sequence-to-sequence model BID19 where both the encoder and decoder are single-layer LSTM recurrent networks. The dimensions of word embedding, RNN hidden state, and attention are all set to 256. We apply dropout of rate 0.2 on the recurrent hidden state. We use Adam optimization for training, with an initial learning rate of 0.001 and batch size of 64. At test time, we use beam search decoding with a beam width of 5. Please see the supplementary materials for more configuration details. Dataset Our dataset is based on the common IWSLT 2014 BID5 German-English machine translation data, as also used in previous evaluation BID24. After proper pre-processing as described in the supplementary materials, we obtain the final dataset with train/dev/test size of around 146K/7K/7K, respectively. The vocabulary sizes of German and English are around 32K and 23K, respectively. Results The BLEU metric BID22 ) is used as the reward and for evaluation. Table 1 shows the test-set BLEU scores of various methods. Besides the approaches described above, we also compare with the Scheduled Sampling method which combats the exposure bias by feeding model predictions at randomly-picked decoding steps during training. 
From the table, we can see the various approaches such as RAML provide improved performance over the vanilla MLE, as more sufficient exploration is made at training time. Our proposed new algorithm performs best, as it interpolates among the existing algorithms to gradually increase the exploration space and solve the generation problem better. FIG2 shows the test-set BLEU scores against the training steps. We can see that, with annealing, our algorithm improves BLEU smoothly, and surpasses other algorithms to converge at a better point. Table 2: Results of text summarization. Dataset We use the popular English Gigaword corpus BID9 for text summarization, and pre-processed the data following BID25. The ing dataset consists of 200K/8K/2K source-target pairs in train/dev/test sets, respectively. More details are included in the supplements. Results The ROUGE metrics (including -1, -2, and -L) BID18 are the most commonly used metrics for text summarization. Following previous work BID8, we use the summation of the three ROUGE metrics as the reward in the learning algorithms. Table 2 show the on the test set. The proposed interpolation algorithm achieves the best performance on all the three metrics. For easier comparison, FIG3 shows the improvement of each algorithm compared to MLE in terms of ROUGE-L. The RAML algorithm, which performed well in machine translation, falls behind other algorithms in text summarization. In contrast, our method consistently provides the best . We have presented a unified perspective of a variety of well-used learning algorithms for sequence generation. The framework is based on a generalized entropy regularized policy optimization formulation, and we show these algorithms are mathematically equivalent to specifying certain hyperparameter configurations in the framework. The new principled treatment provides systematic understanding and comparison among the algorithms, and inspires further enhancement. The proposed interpolation algorithm shows consistent improvement in machine translation and text summarization. We would be excited to extend the framework to other settings such as robotics and game environments. A POLICY GRADIENT & MIXER BID24 made an early attempt to address the exposure bias problem by exploiting the policy gradient algorithm BID29. Policy gradient aims to maximizes the expected reward: DISPLAYFORM0 where R P G is usually a common reward function (e.g., BLEU). Taking gradient w.r.t θ gives: DISPLAYFORM1 We now reveal the relation between the ERPO framework we present and the policy gradient algorithm. Starting from the M-step of Eq. and setting (α = 1, β = 0) as in SPG (section 3.4), we use p θ n as the proposal distribution and obtain the importance sampling estimate of the gradient (we omit the superscript n for notation simplicity): DISPLAYFORM2 where Z θ = y exp{log p θ + R} is the normalization constant of q, which can be considered as adjusting the step size of gradient descent. We can see that Eq. recovers Eq. if we further set R = log R P G, and omit the scaling factor Z θ. In other words, policy gradient can be seen as a special instance of the general ERPO framework with (R = log R P G, α = 1, β = 0) and with Z θ omitted. The MIXER algorithm BID24 incorporates an annealing strategy that mixes between MLE and policy gradient training. Specifically, given a ground-truth example y *, the first m tokens y * 1:m are used for evaluating MLE loss, and starting from step m + 1, policy gradient objective is used. 
The m value decreases as training proceeds. With the relation between policy gradient and ERPO as established above, MIXER can be seen as a specific instance of the proposed interpolation algorithm (section 4) that follows a restricted annealing strategy for token-level hyperparameters (λ 1, λ 2, λ 3). That is, for t < m in Eq.4 (i.e.,the first m steps), (λ 1, λ 2, λ 3) is set to and c = 1, namely the MLE training; while for t > m, (λ 1, λ 2, λ 3) is set to (0.5, 0.5, 0) and c = 2. Algorithm 1 summarizes the interpolation algorithm described in section 4. Get training example (x, y *) for t = 0, 1,..., T do 5: DISPLAYFORM0 if z = 1 then Sample token y t ∼ exp{c · log p θ (y t |y 1:t−1, x)} 8: DISPLAYFORM1 Sample token y t ∼ exp{c · ∆R comm (y t |y 1:t−1, y *)} 10: DISPLAYFORM2 Sample token y t ∼ exp{c · ∆R δ}, i.e., set y t = y * t end if end for Update θ by maximizing the log-likelihood log p θ (y|x) Anneal λ by increasing λ 1 and λ 2 and decreasing λ 3 16: until convergence C EXPERIMENTAL SETTINGS C.1 DATA PRE-PROCESSING For the machine translation dataset, we follow BID20 for data pre-processing. In text summarization, we sampled 200K out of the 3.8M pre-processed training examples provided by BID25 for the sake of training efficiency. We used the refined validation and test sets provided by BID35. For RAML, we use the sampling approach (n-gram replacement) by BID20 to sample from the exponentiated reward distribution. For each training example we draw 10 samples. The softmax temperature is set to τ = 0.4.For Scheduled Sampling, the decay function we used is inverse-sigmoid decay. The probability of sampling from model i = k/(k + exp (i/k)), where k is a hyperparameter controlling the speed of convergence, which is set to 500 and 600 in the machine translation and text summarization tasks, respectively. For MIXER BID24, the advantage function we used for policy gradient is R(y 1:T |y *)− R(y 1:m |y *).For the proposed interpolation algorithm, we initialize the weights as (λ 1, λ 2, λ 3) = (0.04, 0, 0.96), and increase λ 1 and λ 2 while decreasing λ 3 every time when the validation-set reward decreases. Specifically, we increase λ 1 by 0.06 once and increase λ 2 by 0.06 for four times, periodically. For example, at the first time the validation-set reward decreases, we increase λ 1, and at the second to fifth time, we increase λ 2, and so forth. The weight λ 3 is decreased by 0.06 every time we increase either λ 1 or λ 2. Notice that we would not update θ when the validation-set reward decreases. Here we present additional of machine translation using a dropout rate of 0.3 (Table 3). The improvement of the proposed interpolation algorithm over the baselines is comparable to that of using dropout 0.2 (Table 1 in the paper). For example, our algorithm improves over MLE by 1.5 BLEU points, and improves over the second best performing method RAML by 0.49 BLEU points. (With dropout 0.2 in Table 1, the improvements are 1.42 BLEU and 0.64, respectively.) We tested with dropout 0.5 and obtained similar . The proposed interpolation algorithm outperforms existing approaches with a clear margin. FIG6 shows the convergence curves of the comparison algorithms. Model BLEU MLE 26.63 ± 0.11 RAML 27.64 ± 0.09 SPG BID8 26.89 ± 0.06 MIXER BID24 27.00 ± 0.13 Scheduled Sampling 27.03 ± 0.15 Ours 28.13 ± 0.12 Table 3: Results of machine translation when dropout is 0.3. Table 3 ) picked according to the validation set performance. 
| A unified perspective of various learning algorithms for sequence generation, such as MLE, RL, RAML, data noising, etc. | 697 | scitldr |
We report on the SHINRA project, a project for structuring Wikipedia with a collaborative construction scheme. The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, dialogue systems and explainable NLP systems. It is created based on a scheme of "Resource by Collaborative Contribution (RbCC)". We conducted a shared task of structuring Wikipedia, and at the same time, the submitted results are used to construct a knowledge base. There are machine readable knowledge bases such as CYC, DBpedia, YAGO, Freebase, Wikidata and so on, but each of them has problems to be solved. CYC has a coverage problem, and the others have a coherence problem due to the fact that they are based on Wikipedia and/or created by many but inherently incoherent crowd workers. In order to solve the latter problem, we started a project for structuring Wikipedia using an automatic knowledge base construction shared task. Automatic knowledge base construction shared tasks have been popular and well studied for decades. However, these tasks are designed only to compare the performances of different systems, and to find which system ranks the best on limited test data. The results of the participating systems are not shared, and the systems may be abandoned once the task is over. We believe this situation can be improved by the following changes:
1. designing the shared task to construct a knowledge base rather than evaluating only limited test data
2. making the outputs of all the systems open to the public so that we can run ensemble learning to create results better than the best systems
3. repeating the task so that we can run the task with larger and better training data built from the output of the previous task (bootstrapping and active learning)
We conducted "SHINRA2018" with the above mentioned scheme, and in this paper we report the results and the future directions of the project. The task is to extract the values of pre-defined attributes from Wikipedia pages. We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the 200 ENE categories. Based on this data, the shared task is to extract the values of the attributes from Wikipedia pages. We gave out 600 training examples per category, and the participants are required to submit the attribute-values for all remaining entities of the same category type. Then 100 entities out of them for each category are used to evaluate the system outputs in the shared task. We conducted a preliminary ensemble learning on the outputs and found a 15-point F1 improvement on one category and an average improvement of 8 F1 points on all 5 categories we tested, over a strong baseline. Based on these promising results, we decided to conduct three tasks in 2019: a multi-lingual categorization task (ML), extraction for the same 5 categories in Japanese with larger training data (JP-5), and extraction for 34 new categories in Japanese (JP-34). Based on the "Resource by Collaborative Contribution (RbCC)" scheme, we conducted a shared task for structuring Wikipedia for the purpose of attracting participants, while at the same time the submitted results are used to construct a knowledge base. There have been a lot of shared tasks, but the results of the participating systems are not well used. These results, which we believe are great resources, are abandoned after the evaluation is over. We conducted a project which is a shared task on AKBC, but at the same time the results of the systems are gathered and used to produce results even better than the best participating system by ensemble learning.
One of the tricks is that we, the organizers, do not tell the participants which entities constitute the test data. So they have to run their systems on all entities in Wikipedia and submit the results, even though the evaluation results are reported only on 100 test entities among the entire data. By this method, the organizers receive the structured information for all entities from the participants. The accumulated results later become open to the public, and will be used to build even better structured knowledge, for example by ensemble learning methods. In other words, this is a project to use AKBC systems as a tool to construct a huge and well-structured knowledge base in a collaborative manner. We will report on the "SHINRA2018" project, which runs under the RbCC scheme. The task is the attribute extraction task, i.e. to extract the values of the attributes from Wikipedia pages. We have categorized most of the Japanese Wikipedia entities (namely 730 thousand entities) into the 200 Extended Named Entity (ENE) categories prior to this project. Based on this data, the shared task is to extract the values of the attributes which are defined for each category from the text and the infobox in the Wikipedia pages. We gave out 600 training examples per category, and the participants are required to submit the attribute-values for all remaining entities of the same category. Then 100 entities per category, which are hidden from the participants, are used to evaluate the systems in the shared task. To this project, 8 groups submitted results based on 15 systems. We conducted a preliminary ensemble learning on the outputs in order to demonstrate how the RbCC scheme works. The results of the ensemble learning show a large improvement over the best single system for each category. The improvements are 15 points of F-score on the "airport" category, in which the best system achieves an F-score of 72, and 9 points of F-score on average. This shows that the RbCC scheme is very effective. Wikipedia is a great resource as a knowledge base of the entities in the world. However, Wikipedia is created for humans to read rather than for machines to process. Our goal is to transform the current Wikipedia into a machine readable format based on a clean structure. There are several machine readable knowledge bases (KB) such as CYC BID4, DBpedia BID3, YAGO BID7, Freebase BID0, Wikidata BID13 and so on, but each of them has problems to be solved. CYC has a coverage problem, and the others have a coherence problem due to the fact that they are based on Wikipedia and/or created by many but inherently incoherent crowd workers. In order to solve these problems, we started a project for structuring Wikipedia using an automatic knowledge base construction (AKBC) shared task with a cleaner ontology definition. Automatic knowledge base construction shared tasks have been popular for decades. In particular, there are popular shared tasks in the field of Information Extraction, Knowledge Base Population and attribute extraction, such as KBP [U.S. National Institute of Standards and Technology (NIST), 2018] and CoNLL. However, most of these tasks are designed only to compare the performances of participating systems, and to find which system ranks the best on limited test data. The outputs of the participating systems are not shared, and the results and the systems may be abandoned once the evaluation is over. We believe this situation can be improved by the following changes:
1. designing the shared task to construct a knowledge base rather than only evaluating on limited test data
2. making the outputs of all the systems open to the public so that anyone can run ensemble learning to create results better than the best single system
3. repeating the task so that we can run the task with larger and better training data built from the output of the previous task (active learning and bootstrapping)
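As a concrete illustration of the second point, the following is a minimal sketch (not the actual SHINRA ensembling procedure) of how the shared outputs of several participant systems could be combined by per-attribute voting. The one-record-per-line JSON format, the field names, and the 0.5 voting threshold are illustrative assumptions, not the official submission format.

```python
# Minimal sketch of combining shared system outputs by per-attribute voting.
# File format, field names and the 0.5 threshold are illustrative assumptions.
import json
from collections import Counter, defaultdict

def load_submission(path):
    """Read one participant's output as {(page_id, attribute): set of values}."""
    extractions = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # assumed: one JSON record per line
            key = (record["page_id"], record["attribute"])
            extractions[key].add(record["value"])
    return extractions

def ensemble(submissions, min_vote_ratio=0.5):
    """Keep a value if at least min_vote_ratio of the systems extracted it."""
    votes = defaultdict(Counter)
    for system_output in submissions:
        for key, values in system_output.items():
            for value in values:
                votes[key][value] += 1
    n_systems = len(submissions)
    merged = defaultdict(set)
    for key, counter in votes.items():
        for value, count in counter.items():
            if count / n_systems >= min_vote_ratio:
                merged[key].add(value)
    return merged

# Example usage with hypothetical file names:
# submissions = [load_submission(p) for p in ["sys1.jsonl", "sys2.jsonl", "sys3.jsonl"]]
# kb = ensemble(submissions, min_vote_ratio=0.5)
```

In practice the combination could be weighted by each system's validation performance rather than a simple vote, but even this plain majority scheme conveys why sharing all outputs makes a better-than-best-system knowledge base possible.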
We conducted "SHINRA2018" with the aforementioned ideas; we call this scheme "Resource by Collaborative Contribution (RbCC)". In this paper we report the first results and the future directions of the project. The task is to extract the values of pre-defined attributes from Wikipedia entity pages. We used Extended Named Entity (ENE) as the definition of the categories (200 categories in the ontology in total) and of the attributes (20 attributes per category on average). We have categorized most of the entities in Japanese Wikipedia (namely 730 thousand entities) into the ENE categories prior to this project. Based on this data, the shared task is to extract values of the attributes defined for the category of each entity. In the SHINRA2018 project, we limited the target categories to 5, namely person, company, city, airport and chemical compound. We gave out 600 training examples for each of the 5 categories, and the participants are supposed to submit the attribute-values for all remaining entities of these categories in Japanese Wikipedia. Then 100 entities out of the entire pages of each category are used in the evaluation of the participating systems in the shared task. For example, there are about 200K person entities in Japanese Wikipedia, and the participants have to extract the attribute-values, such as "birthday", "the organizations he/she has belonged to", "mentor" and "awards", from all the remaining entities (i.e. 199.4K = 200K - 600 entities). Before starting the project, the participants signed a contract that all the outputs will be shared among all participants, so that anyone can conduct ensemble learning on those outputs, and hence create a better knowledge base than the best system in the task. Note that, for the sake of the participants' interests (e.g., a company may want to keep its system as its property), the outputs are required to be shared, but the systems do not necessarily have to be shared. A promising result of the ensemble learning is achieved, and we envision that it will lead to cleaner machine-readable knowledge base construction. Structured knowledge bases have been considered one of the most important knowledge resources in the field of Natural Language Processing. There have been several major projects targeting the construction of structured knowledge bases in the past. One of the earliest projects is CYC, and more recently there are Wikipedia-based projects such as DBpedia, YAGO, Freebase and Wikidata. Moreover, there are some shared tasks aiming to build techniques for knowledge base structuring, such as KBP and CoNLL. We will introduce these resources and projects and describe the points we consider as issues to be solved in those projects. The CYC ontology is a large knowledge base constructed as common sense knowledge BID4. It was one of the large AI projects of the 1980s and 1990s, which used human labor to construct the knowledge base. The cost of construction and maintenance of handmade knowledge bases for the general domain is very high, and it is known that such knowledge bases have problems with coverage and consistency. DBpedia is a more recent project to construct structured information from the semi-structured data in Wikipedia, such as infoboxes and categories BID3.
DBpedia also has problems with accuracy, coverage, and coherence. Like CYC, it is also created by humans, but in this case, those who worked on creating the knowledge are non-experts in ontology. For example, "Shinjuku Station", which is a railway station, has the category "Odakyu Electric Railway", which is a railway company using the station. Of course, a station cannot be an instance of a railway company, so this is not an appropriate category. There are many examples like this in DBpedia. Also, there are many inconsistencies in the category hierarchy, and the attributes defined in the KB are not well organized in many categories. Yet Another Great Ontology (YAGO) is an ontology constructed by mapping Wikipedia articles to WordNet synsets BID7. YAGO adopts attribute information extracted from infoboxes, as DBpedia does, because no attributes are defined in WordNet synsets. Freebase is a project to construct a structured knowledge base by crowdsourcing, in the same manner as Wikipedia BID0. However, because of the crowdsourcing approach, Freebase does not have a well-organized ontology. It is noisy and lacks coherence because it was created by unorganized crowds. Currently, the Freebase project has been paused and integrated into Wikidata. Wikidata aims to be a structured knowledge base based on a crowdsourcing scheme BID13. Wikidata is also noisy and lacks coherence because it is constructed in a bottom-up manner, the same as Wikipedia and Freebase. For example, just comparing the definitions of "city", "town" and "human settlements", we can easily observe inconsistencies in the properties (the numbers of properties are 30, 0 and 6, respectively); there are very biased properties such as "Danish urban area code" in "human settlements"; and there are many related ambiguous entities, such as "like a city", "city/town" and so on. Also, category inconsistencies can easily be found; for example, "city museum" and "mayor" are subcategories of "city", although a mayor is not an instance of "city". Wikipedia allows topics to be included in a category; however, this policy prevents the category hierarchy from being a well-designed ontology. KBP is a shared task organized by NIST for establishing the technology to construct a structured knowledge base from unstructured documents [U.S. National Institute of Standards and Technology (NIST), 2018]. KBP mainly consists of two tasks. One task is Entity Discovery and Linking (EDL), which is to find and identify entities defined in the DB from documents. The other is Slot Filling, which is to extract attribute information of an entity. KBP in general limits the entity types to Person, Location, and Organization, in contrast to Wikipedia's wide coverage, and it is mostly a competition-based project with no resource creation purpose.
We believe the major cause is the fact that these are created in a bottom-up manner, and that a top-down design is essential for designing the ontology and the attributes. As a top-down designed ontology for named entities, we employed the "Extended Named Entity (ENE) hierarchy" in our project. Extended Named Entity (ENE) is a named entity classification hierarchy along with attribute definitions for each category BID9 BID10. It includes 200 fine-grained categories of entities in a hierarchy of up to 4 layers. It contains not only finer categories of the typical NE categories, such as "city" and "lake" for "location" or "company" for "organization", but also new named entity types such as "products", "event", "position" and so on. These categories are designed to cover a large number of entities in the world, using encyclopedias and many other resources. Figure 1 shows the ENE definition (version 7.1.0). Attributes are defined based on an investigation of the entities in each category. For example, the attributes for the "airport" category are as follows: "Reading", "IATA code", "ICAO code", "nickname", "name origin", "number of users per year", "the year of the statistic", "the number of airplane landings per year", "longitude", "latitude", "location", "old name", "elevation", "big city nearby", "number of runways" and so on. Please refer to the project homepage for the complete definition.

In order to conduct the shared-task of attribute-value extraction on Wikipedia entities, we first have to assign one or more categories to each Wikipedia entity. For example, we have to know that the Wikipedia page of "Chicago O'Hare Airport" belongs to an airport entity, and we are supposed to extract the attribute-values of an airport from the page. We annotated one or more of the 200 ENE categories to 782,406 entities of Japanese Wikipedia (201711 version) prior to this project. In the annotation, we excluded the less popular entities which have fewer than 5 incoming links (151K entities), and non-entity pages (about 53K pages) such as common nouns and simple numbers. This annotation was done by a machine learning method followed by hand checking of the less reliable outputs. The machine learning BID8 was conducted with 20K training data created by hand. Then a human check was conducted on the machine learning outputs with less reliable scores. In order to see the quality of the categorization, we evaluated a data sample with multiple annotators and observed that the accuracy of the categorization is 98.5%. The remaining 1.5% are those which are ambiguous in nature and are very difficult even for human annotators. TAB0 shows the most frequent categories with their frequencies in the data.

In this section, we describe the definition of the shared-task. The task is to extract the attribute-values of entities from their Wikipedia pages. For this year's task (SHINRA2018), we picked 5 categories, namely "person", "city", "company", "airport" and "chemical compound". This selection was made from the largest subcategories of "person", "location" and "organization", which are the traditional three categories of named entity (person has no subcategories and is itself a single category in ENE). "Airport" is selected as a category which has a well-structured infobox in Wikipedia; on the other hand, "chemical compound" is selected because the information in its infobox is not quite satisfactory for NLP purposes.
The infobox for "chemical compound" contains factual information such as "boiling temperature" or "chemical formula", but it doesn't contain information such as "usage" or "production method", which we believe are important attributes, for example, for QA purposes. We gave out 600 training data for each category. In the training data, all attribute-values mentioned in the Wikipedia page are manually extracted and form the training data in JSON format. The participants also received the list of all entities, i.e. Wikipedia pages, for the 5 categories, and they are required to extract attribute-values from the Wikipedia pages of all entities. The evaluation is conducted on 100 entities for each category, but the participants are not notified which 100 entities are used for evaluation, even after the evaluation is over. This is for the purpose of data construction, so that the participants have to do their best to produce output for all the data, and for the purpose of future comparison (if the test data were known, the participants could tune their systems to the test data, even unintentionally, through a number of experiments). The results are reported by precision, recall and F1 scores, as usual. The results submitted before the deadline are reported as the formal results, and those submitted after the deadline are reported as reference results. In the ensemble learning experiment which will be reported later in this paper, we use all the results regardless of their formal or reference status, so that we can achieve the best result for the resource construction purpose. The participants are not required to submit results for all 5 categories, as some participants might be interested in a particular category or may have smaller machine resources and cannot run their system on all the categories.

The manual creation of the training and test data was not easy. In this section, we briefly describe the data preparation, as this process is very interesting and could be the topic of another paper by itself. As a preliminary experiment, we tried three types of annotators to create the data:

• Experts in the construction of linguistic data
• Students who are supervised by experts
• Workers on a crowdsourcing platform (Lancers)

As we can imagine intuitively, we found that the higher in the list, the more expensive, but at the same time the more accurate. Also, we found that crowdsourcing has relatively high coverage under our crowdsourcing strategy. The crowdsourcing task is designed with three stages. The first stage is to identify the sections where the given attribute-value is written. In this stage, even if the workers find the value in the page, they are not requested to extract it. This identification of sections is repeated until two workers report that no more values are found, because some attributes have multiple values in one page. Then, in the second stage, the values are extracted from the sections which were identified to contain the value(s). The final stage is to check whether the extracted value is really a value for the attribute. This careful strategy may be what leads to the relatively high coverage. Based on the preliminary annotation experiments, we decided to use "expert" and "crowd" for the final data creation. The first round of annotation is done by both "expert" and "crowd" independently for the same attributes, and then both are merged to create the final annotation by another "expert" (different from the one who annotated it initially).
The inter-annotator agreements between "crowd" and "experts" are 60-80%, and those between "experts" are 80-90%, depending on the attributes. The coverage by "crowds" is relatively high, and it helps suggest information missed by the first "expert" to the "expert" doing the final annotation.

In this section, we report the results of the shared-task. Five months were given to the participants to develop their systems and run their experiments, from April to September 2018. 16 systems by 8 participants were submitted to SHINRA2018. The first two columns in TAB1 show the participants (some in abbreviations) and their methods. Here "pattern" means that they created hand-made patterns for the attribute-value extraction, and "DL" means some sort of "Deep Learning". "DrQA" is an open-source QA system adapted to Japanese QA. In this system, the participant transformed the infobox into sentences by patterns, e.g. "The birthday of Barack Obama is August 4, 1961", and the attribute-values to be extracted are transformed into questions, e.g. "Who is the father of Barack Obama?", in order to extract the "father" attribute-value of the "Barack Obama" entity. They then train and run DrQA for all categories together. For the RbCC purpose, it is quite valuable to have a wide variety of technologies used in this shared-task. The results are shown in the 3rd to 7th columns of the same table. The top result is shown in bold for each category. Unisys's DrQA system performs best in three categories, most of which don't have much information in the infobox. As their method handles all the attributes in a single system (regardless of whether the value is in the infobox or in the explanation sentences), the amount of training data for the system becomes relatively larger, and it may benefit from this training data size in a situation where the training data is relatively small. TUT's pattern-based system performed very well on the airport category, in which most of the required information is described in the infobox, and practically only one infobox template is used in the category. Note that the category "person" has many different infobox templates depending on the vocation of the person, and a company's infobox templates vary depending on the type of the company.

The goal of "Resource by Collaborative Contribution (RbCC)" is to produce a more accurate KB than the KB created by the best single system. In order to see if the RbCC scheme is practical and promising, we conducted a preliminary experiment of ensemble learning on all of the systems' outputs. Note that the ensemble learning is conducted in order to show evidence of whether the RbCC scheme is practical and promising. In the past, ensemble learning methods have been studied with various ideas, such as bagging BID5, boosting BID2 or stacking BID15 BID1 BID11. These methods are generally used to create a high-accuracy system combining more than one ML system. However, in our situation, the outputs of many systems are given, and the objective is to produce the best output out of the system outputs by an ensemble learning method. Because of this, the stacking method is best suited for our purpose, but first we tried two simpler methods, i.e. the simple voting method and the weighted voting method based on the accuracy on held-out data, as a preliminary ensemble learning experiment. First, we explain the simple voting method. Assume there are n systems which output value v for an attribute of an entity. Then the value v receives score n. Separately, we compute the threshold t by maximizing the accuracy on held-out data. If n > t, then we take the value v as the output of the ensemble system. In practice, we split the actual test data, which contains 100 samples, into two halves: one for the held-out data and the other as the test data for this experiment. We then conducted the same experiment with the held-out data and test data swapped; i.e. we conducted a 2-fold cross-validation experiment on the 100 test data. The other method is the weighted voting method. We weight the vote of each system by the accuracy of that system on the held-out data. Instead of the number of systems which produce attribute-value v, we compute the sum of their accuracies as the score for the value v. The way the threshold is defined and the cross-validation mechanism are the same as those of the simple voting method.
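To make the two voting schemes concrete, the following is a minimal Python sketch, not the actual code used in the project. It assumes each system's output is represented as a mapping from (entity, attribute) pairs to sets of extracted values; passing per-system held-out accuracies as weights turns the simple voting scheme into the weighted one. The threshold search here maximizes F1 on the held-out half, which is one reasonable reading of "maximizing the accuracy" above.

```python
from collections import defaultdict

def vote(system_outputs, weights=None):
    """Score each candidate (entity, attribute, value) triple by (weighted) votes.

    system_outputs: list of dicts, one per system, mapping
        (entity, attribute) -> set of extracted values.
    weights: optional list of per-system weights (e.g. held-out accuracy);
        None gives the simple voting scheme (each system counts as 1).
    """
    scores = defaultdict(float)
    for i, outputs in enumerate(system_outputs):
        w = 1.0 if weights is None else weights[i]
        for (entity, attribute), values in outputs.items():
            for v in values:
                scores[(entity, attribute, v)] += w
    return scores

def select_threshold(scores, gold):
    """Pick the threshold t that maximizes F1 of {candidates with score > t} on held-out data."""
    best_t, best_f1 = 0.0, -1.0
    for t in sorted({0.0, *scores.values()}):
        predicted = {k for k, s in scores.items() if s > t}
        tp = len(predicted & gold)
        prec = tp / len(predicted) if predicted else 0.0
        rec = tp / len(gold) if gold else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# 2-fold cross-validation as described above: choose t on one half of the
# 100 evaluation entities, apply "score > t" on the other half, then swap.
```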
We show the precision, recall and F1 scores of the baseline and the ensemble methods in Table 8. The relative improvement of those two methods compared to the baseline method is shown in FIG0. The baseline method is constructed by combining the best system outputs for each category, i.e. the TUT system for "airport", the AIP system for "city" and the Unisys system for the rest, which is already better than any single system (e.g. Unisys alone). We can observe from the table and the graph that the two voting methods perform better than the baseline method in F1 score. Also, the weighted voting method performs better than the simple voting method. The improvement exceeds 15 points of F-score on the "airport" category and 4 points of F-score for all categories. The average improvement is 10.4 points of F-score. The results show the effectiveness of the ensemble learning methods and are evidence that the "Resource by Collaborative Contribution" scheme is promising and encouraging.

Based on the success of the SHINRA2018 project, we decided to continue this project as SHINRA2019. We are planning to conduct three tasks: a multi-lingual categorization task (ML), attribute-value extraction for the same 5 categories (JP-5), and attribute-value extraction for 30 new categories (JP-30).

The multi-lingual task (ML) is to expand the benefit of RbCC to knowledge base resources in languages other than Japanese. We are planning to run it on the 9 languages with the largest numbers of "active users", namely English, Spanish, French, German, Chinese, Russian, Portuguese, Italian and Arabic BID14. Actually, Japanese is the 10th-ranked language on this measure, so the Wikipedias of these 9 languages have more users than that of Japanese. As we don't have category information for the entities of those 9 language Wikipedias, the first task is to categorize the entities. For Japanese, we annotated 20K entities as training data for the categorization, but now that we have most of the Japanese entities categorized, we can utilize this information. There are links between equivalent Wikipedia entities in different languages. For example, we observed about 500K entity links from Japanese to English among the 720K entities already categorized in Japanese. We can use them as training data to categorize English entities. Likewise, there are language links from Japanese Wikipedia to the other 8 language Wikipedias; although the numbers of linked entities are much smaller and some noise may exist, the participants can use much bigger training data than that of the initial Japanese categorization experiment. As there are also links between the Wikipedias of the other languages, and possibly different types of infoboxes exist in the other languages too, the participants have a lot of information to use in the categorization task.
JP-5 is the task of extracting the attribute-values for the same 5 categories as in SHINRA2018, namely "person", "company", "city", "airport" and "chemical compound". At SHINRA2018, the values were prepared without contexts. In other words, even if there was more than one mention of a particular attribute-value, we didn't give out which mention corresponds to that value. For example, assume the nationality of a person is "Japan"; the same string may be mentioned elsewhere in the person's page without indicating the nationality of the person, e.g. "He left Japan", and we had no means to know that such a context is not about the nationality. This is similar to the situation in distant supervision, so it is difficult to extract only the contexts for nationality. At SHINRA2019, we will annotate the attribute-values in the text, so that the exact context for each value can be extracted. We are also planning to expand the size of the training data from 600 to 1500, at least for the categories "person", "company" and "city", using the output of the ensemble system. This forms a bootstrapping scheme as the project proceeds year by year.

JP-30 is the task of extracting attribute-values for 30 new categories. As we mentioned in the previous section, creating the data is laborious, so the size of the training data will be very small, namely 100. However, the categories to be tested are very close to each other: 7 subcategories of Geographical Political Entities (GPE) such as country, prefecture/state and county; 8 subcategories of terrain such as mountain, island, river, lake and ocean; and organizational entities such as international organization, political organization, ethnic group and nationality. Although the number of training data is much smaller, we chose very similar types, and similar attributes may exist across them. Some machine learning techniques with adaptation might help create good results. We expect to build larger training data using the bootstrapping scheme, just like JP-5 between SHINRA2018 and SHINRA2019. We hope to have many participants so that better results can be achieved by the ensemble learning methods for all three tasks.

We proposed a scheme of knowledge base creation: "Resource by Collaborative Contribution". We conducted the Japanese Wikipedia structuring project, SHINRA2018, based on that scheme. Based on Extended Named Entity, a top-down definition of categories and attributes for named entities, the task is to extract the attribute-values from Japanese Wikipedia pages. 8 groups participated in the task, and the ensemble learning results show that the RbCC scheme is practical and promising. A quite big improvement over the best single system was achieved on the "airport" category (more than 15 points of F-score), and an average improvement of 8 points of F-score was achieved using the weighted voting method. We are planning to conduct SHINRA2019 based on the RbCC scheme on 3 tasks: the multi-lingual categorization, the extraction of attribute-values on the same 5 categories, and the extraction of attribute-values on 30 new categories in Japanese.

We'd like to express our deep appreciation to all the participants and collaborators who helped this project. Without their participation, we couldn't even have tried the ensemble learning and achieved the goal. We hope to expand and spread the idea of the RbCC scheme, and not only for this kind of task and resource.

TL;DR: We introduce a "Resource by Collaborative Contribution" scheme to create a structured knowledge base from Wikipedia.
Recent image super-resolution (SR) studies leverage very deep convolutional neural networks and the rich hierarchical features they offer, which leads to better reconstruction performance than conventional methods. However, the small receptive fields in the up-sampling and reconstruction process of those models prevent them from taking full advantage of global contextual information, which limits further performance improvement. In this paper, inspired by the image reconstruction principles of the human visual system, we propose an image super-resolution global reasoning network (SRGRN) to effectively learn the correlations between different regions of an image through global reasoning. Specifically, we propose the global reasoning upsampling module (GRUM) and the global reasoning reconstruction block (GRRB). They construct a graph model to perform relation reasoning on regions of low-resolution (LR) images. They aim to reason about the interactions between different regions in the upsampling and reconstruction process and thus leverage more contextual information to generate accurate details. Our proposed SRGRN is also more robust and can handle low-resolution images that are corrupted by multiple types of degradation. Extensive experiments on different benchmark datasets show that our model outperforms other state-of-the-art methods. Our model is also lightweight and consumes less computing power, which makes it very suitable for real-life deployment.

Image super-resolution (SR) aims to reconstruct an accurate high-resolution (HR) image given its low-resolution (LR) counterpart. It is a typical ill-posed problem, since the LR-to-HR mapping is highly uncertain. In order to solve this problem, a large number of methods have been proposed, including interpolation-based (Zhang & Wu, 2006), reconstruction-based, and learning-based methods (e.g. Peleg & Elad, 2014). In recent years, deep learning based methods have achieved outstanding performance in super-resolution reconstruction. Some effective residual or dense blocks (e.g. Ahn et al.) have been proposed to make the network wider and deeper and have achieved better results. However, they only pay close attention to improving the feature extraction module, ignoring that the upsampling process with its smaller receptive field does not make full use of those extracted features. A small convolution receptive field means that the upsampling process can only perform super-resolution reconstruction based on local feature relationships in the LR image. As we all know, different features interact with each other, and features in different regions have corresponding effects on the upsampling and reconstruction of a certain region. That is to say, a lot of information is lost in the process of upsampling and reconstruction due to the limitation of the receptive field, even though the network extracts a large number of hierarchical features ranging from low frequency to high frequency. Chariker et al. (2016) show that the brain generates the images we see based on a small amount of information observed by the human eye, rather than acquiring the complete data from a point-by-point scan of the retina. This process of generating an image is similar to an SR process. Following this idea, we add global information to SR reconstruction and propose to use relational reasoning to imitate the process by which the human visual system reconstructs images from observed global information. In general, extracting global information requires a large receptive field.
A large convolution receptive field usually requires stacking a large number of convolutional layers, but this approach does not work in the upsampling and reconstruction process, because it would produce a huge number of parameters. Based on the above analysis, we propose an image super-resolution global reasoning network (SRGRN) which introduces the global reasoning mechanism into the upsampling module and the reconstruction layer. The model can capture the relationships between disjoint features of the image with a small receptive field, thereby fully exploiting global information as a reference for upsampling and reconstruction. We mainly propose the global reasoning upsampling module (GRUM) and the global reasoning reconstruction block (GRRB) as the core structure of the network. GRUM and GRRB first convert the LR feature map into N nodes, each of which not only represents a feature region in the LR image, but also contains the influence of pixels in other regions on this feature. Then they learn the relationships between the nodes and fuse the information of each node in a global scope. After that, GRUM learns the relationships between the channels in each node and amplifies the number of channels for the upsampling process. They then convert the N nodes back into pixels carrying global reasoning information. Finally, GRUM and GRRB complete the upsampling and reconstruction processes, respectively. In general, our work has the following three contributions:

• We propose an image super-resolution global reasoning network (SRGRN) which draws on the image reconstruction principles of the human visual system. We mainly focus on the upsampling module and the reconstruction module. The model reconstructs SR images based on relational reasoning in a global scope.

• We propose a global reasoning upsampling module (GRUM) and a global reasoning reconstruction block (GRRB), which construct a graph model to implement relational reasoning among the feature regions in an image via 1D and 2D convolutions, and finally add the information obtained by global reasoning to each pixel. This can provide more contextual information to help generate more accurate details.

• Our proposed GRUM and GRRB are lightweight, which makes them suitable for real-life deployment. More importantly, GRUM and GRRB balance the number of parameters and the reconstruction performance well. They can easily be inserted into other models.

Deep CNN for SR and upsampling methods. Deep learning has achieved excellent performance in image super-resolution tasks. Dong et al. applied convolutional neural networks to image SR for the first time. After this, Kim et al. proposed VDSR and DRCN, which introduced residual learning to make the network depth reach 20 layers and achieved significant improvement. More and more researchers then began to pay attention to improving the network's feature extraction part. Lim et al. proposed EDSR and MDSR, which introduce residual scaling and remove unnecessary modules from the residual block. Concerned that previous models only adopt the features of the last Conv layer, Zhang et al. (2018b) proposed the residual dense network to make full use of hierarchical features from each Conv layer. The above and most subsequent networks implement the upsampling based on either transposed convolution (Zeiler & Fergus, 2014) or sub-pixel convolution. Although these models have achieved good results, there exists a problem: these upsampling methods have only a small receptive field.
This means that the upsampling can only take advantage of contextual information within a small area. Recently, researchers have proposed some new super-resolution upsampling processes. LapSRN allows low-resolution images to be directly input into the network for step-by-step amplification. DBPN exploits iterative up-and-down sampling layers, and SRFBN further explores the application of a feedback mechanism (weight sharing) in SR. These models have achieved better reconstruction performance. However, the Conv layers in these upsampling modules still have only a small receptive field.

Global reasoning mechanism. Recently, graph-based deep learning methods have begun to be widely used for relational reasoning. Santoro et al. propose Relation Networks (RN) to solve problems that depend on relational reasoning. SIN implements object detection using a graph model for structure inference. Furthermore, a Global Reasoning unit consisting of five convolutions has been proposed for image classification, semantic segmentation and video action recognition tasks. Considering that the way the human visual system generates images from observed global information is also a reasoning process, and that the correlations between feature regions can be obtained through relational reasoning, which makes each pixel in the generated SR image jointly determined by information in a global scope, we propose a global reasoning network for SR. We detail our SRGRN in the next section.

According to Chariker et al. (2016), only a little information is transmitted from the retina to the visual cortex, and the brain then reconstructs the real-world images based on the information received. We regard this as a reasoning process in a global scope. For image SR, the upsampling module constructs SR images based on features in LR images, which is substantially similar to detecting the category of each pixel of the SR image and generating these pixels based on the contextual information of the corresponding LR image. Due to the limitation of the convolution receptive field, only a small amount of contextual information can be utilized to generate HR images in most other models. This leads to many details in the HR image not being fine. Similarly, the above problem also exists in the reconstruction process. To solve these problems, we simulate the reasoning process that exists in the human visual system and propose SRGRN to make full use of contextual information to recover accurate details, which is achieved by constructing a graph model and reasoning about the relationships between regions in an image.

As shown in Figure 1, our SRGRN includes a feature extraction part, the global reasoning upsampling module (GRUM) and the global reasoning reconstruction block (GRRB). Let us denote I_LR and I_SR as the input and output of SRGRN. The feature extraction part can use the architecture of most other models; here we use the feature extraction part of RDN (Zhang et al., 2018b) as an example. It can be expressed as F_L = H_FEX(I_LR), where H_FEX(·) denotes the series of operations of the feature extraction part. As in previous work, the number of GRUMs depends on the scaling factor. The GRUM receives F_L as input, and F_S represents the output of the GRUM. The GRUM can be expressed as F_S = H_GRUM(F_L), where H_GRUM(·) denotes the series of operations of the GRUM. More details about GRUM are given in Section 3.2. We further apply the global reasoning reconstruction block (GRRB) to utilize the global contextual information to generate the output image.
The GRRB can be expressed as I_SR = H_GRRB(F_S), where H_GRRB(·) denotes the series of operations of the GRRB. More details about GRRB are given in Section 3.3. After the above operations, we obtain the corresponding SR image.

In this section, we present the details of our proposed global reasoning upsampling module (GRUM), shown in Figure 2. In order to help achieve relation reasoning, we map each image to a graph model. We first need a function to construct the N nodes of the oriented graph, each of which represents a region in the image. In GRUM, we obtain relationship weights between the pixels through a 2D 1 × 1 convolution, and then convert the input F_L into N nodes via element-wise product. The benefits of this approach are mainly the following: it not only aggregates a feature region of the input F_L into a node, but also digs out the influence of the other pixels in the image on this region, which is equivalent to adding global guidance of the image to each node; and using a convolution means that these relationship weights are trainable. This process yields N nodes with C channels. After that, we use the 1D Conv - Leaky ReLU - 1D Conv (CLC) structure to implement reasoning and interaction between the N nodes of the graph. The parameters in CLC correspond to the adjacency matrix of the weighted oriented complete graph, which stores the correlations between the nodes. CLC can learn and reason about the complex nonlinear relationships between nodes better than a single 1D Conv. The output of this reasoning step is Y_N, where Conv(·) and LRelu(·) denote the 1D convolution along the node dimension and the Leaky ReLU operation, respectively. We then use a bottleneck to achieve channel amplification. The bottleneck receives Y_N ∈ R^(N×C) as input and redistributes these channels by modeling the relationships between the channels of each node, amplifying the number of channels to C × r², where r is the upscaling factor. The first convolution in the bottleneck reduces the channel dimension C to C̃ = C/α, where α is the reduction ratio, and the second convolution grows the channel dimension from C̃ to C × r². The bottleneck not only fits the complex relationships between channels better and redistributes the channels more accurately, but also greatly reduces the number of parameters compared with using a single convolution. The channel amplification produces the output tensor Y_NC ∈ R^(r²C×N). In order to expand the resolution by pixel shuffle as in ESPCN, we need to transform the N nodes (of shape r²C × N), which have undergone relational reasoning, back into a spatial feature map. As above, we learn a function to obtain a weight matrix of shape N × HW through a 1 × 1 2D convolution, and then normalize the weight matrix along the column with softmax. Finally, Y_rC and W_P are obtained, where Y_rC is a feature map in which each pixel is associated with the N nodes, and W_P ∈ R^(N×HW) is the normalized weight matrix. The values of these weights range from 0 to 1, which means that the reconstruction of each pixel is affected by the N nodes to varying degrees. Each pixel in the feature map thus contains information generated by global reasoning. After pixel shuffle, the output is multiplied by a parameter γ1 and added to the upsampling result obtained without global reasoning. The initial value of γ1 is set to 0. As the global reasoning module trains, the network gradually learns to assign values to γ1, thereby fully exploiting global reasoning. In this process, H_PS(·) denotes the pixel shuffle operation and H_UP(·) denotes the sub-pixel convolution; the final output F_S is obtained by adding the two branches.
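The GRUM described above can be summarized in a short PyTorch sketch. This is a minimal re-implementation based only on the description in this section, not the authors' released code; in particular, the exact form of the pixel-to-node aggregation and the places where normalization is applied are assumptions. The GRRB follows the same pattern, without the channel amplification and the pixel shuffle.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GRUM(nn.Module):
    """Sketch of the global reasoning upsampling module (shapes follow the text)."""

    def __init__(self, channels, num_nodes=10, scale=2, alpha=8):
        super().__init__()
        self.n, self.r = num_nodes, scale
        # 1x1 conv producing N spatial weight maps that aggregate pixels into nodes
        self.node_proj = nn.Conv2d(channels, num_nodes, 1)
        # CLC: 1D Conv - LeakyReLU - 1D Conv, reasoning across the N nodes
        self.clc = nn.Sequential(
            nn.Conv1d(num_nodes, num_nodes, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(num_nodes, num_nodes, 1),
        )
        # bottleneck: C -> C/alpha -> r^2 * C channels per node
        reduced = max(channels // alpha, 1)
        self.bottleneck = nn.Sequential(
            nn.Conv1d(channels, reduced, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(reduced, channels * scale * scale, 1),
        )
        # 1x1 conv producing the weights W_P that redistribute nodes back to pixels
        self.pixel_proj = nn.Conv2d(channels, num_nodes, 1)
        # plain sub-pixel branch ("upsampling without global reasoning")
        self.plain_up = nn.Conv2d(channels, channels * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.gamma = nn.Parameter(torch.zeros(1))  # gamma_1, initialized to 0

    def forward(self, x):                                    # x: (B, C, H, W)
        b, c, h, w = x.shape
        # pixels -> N nodes
        w_n = self.node_proj(x).view(b, self.n, h * w)       # (B, N, HW)
        nodes = torch.bmm(w_n, x.view(b, c, h * w).transpose(1, 2))  # (B, N, C)
        # relation reasoning between nodes
        nodes = self.clc(nodes)                               # (B, N, C) -> Y_N
        # channel amplification per node
        nodes = self.bottleneck(nodes.transpose(1, 2))        # (B, r^2*C, N) -> Y_NC
        # redistribute node information to pixels (softmax along the node axis)
        w_p = F.softmax(self.pixel_proj(x).view(b, self.n, h * w), dim=1)
        reasoned = torch.bmm(nodes, w_p).view(b, c * self.r ** 2, h, w)
        # residual fusion of the two branches after pixel shuffle
        return self.shuffle(self.plain_up(x)) + self.gamma * self.shuffle(reasoned)

if __name__ == "__main__":
    f_l = torch.randn(1, 64, 24, 24)
    print(GRUM(64, num_nodes=10, scale=2)(f_l).shape)         # torch.Size([1, 64, 48, 48])
```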
Figure 3: Global reasoning reconstruction block (GRRB) architecture.

As shown in Figure 3, the specific details are similar to GRUM. We also construct a graph model for the reconstruction block. In GRRB, we first obtain the relationship weights W_RN ∈ R^(N×rHrW) between the pixels of F_S by a 2D 1 × 1 convolution, and then aggregate the regions in F_S into N nodes by element-wise product, which gives the output Y_RW ∈ R^(N×C). After that, we use CLC to perform the relationship reasoning between nodes. Then we exploit the learned weight matrix W_RP ∈ R^(N×rHrW), W_RP = Softmax(V_d(Conv(F_S))), to redistribute the information of the N nodes back to the pixels. The output of this process, Y_RrC ∈ R^(C×rH×rW), is computed from the CLC output, where F_CLC refers to the operations of CLC. In addition, we apply the idea of residual connection in GRRB, which multiplies the information generated via global reasoning by a parameter γ and then adds it to the input feature map. The initial value of γ is set to 0; as training progresses, the network assigns more weight to γ. Finally, we feed the feature map with global reasoning into two Conv layers for reconstruction, whose operations are denoted H_RL(·), to obtain the final output.

In our proposed SRGRN, as in previous methods, the number of GRUMs depends on the scaling factor. For Conv layers with kernel size 3 × 3, we pad zeros to keep the size fixed. We set the reduction ratio in the bottleneck to α, and the number of nodes in the graph model to N. We use Leaky ReLU with a negative slope of 0.2 as the non-linear activation function. The feature extraction part of the network is the same as in the RDN (Zhang et al., 2018b) setting. The final Conv layer has 1 or 3 output channels, as we output gray or color HR images.

4.1 SETTINGS

Datasets and Metrics. We train all our models using the 800 training images of the DIV2K (Agustsson & Timofte, 2017) dataset, which contains high-quality 2K images that can be used for the image super-resolution task. We use five standard benchmark datasets to evaluate the PSNR and SSIM metrics: Set5, Set14, B100, Urban100 and Manga109. The SR results are evaluated on the Y channel of the transformed YCbCr space.

Degradation Models. In order to make a fair comparison with existing models, bicubic downsampling (denoted BI) is regarded as the standard degradation model. We use it to generate LR images with scaling factors ×2, ×3, and ×4 from the ground-truth HR images. To fully demonstrate the effectiveness of our model, we also use two other degradation models and conduct special experiments for them. Our second degradation model, which we denote BD, blurs HR images with a Gaussian kernel of size 7 × 7 and a standard deviation of 1.6, and then downsamples the image with scaling factor ×3. In addition to BI and BD, we also built the DN model, which first performs bicubic downsampling with scaling factor ×3 and then adds Gaussian noise with a noise level of 30.

Training Setting. In each training batch, 16 LR RGB patches of size 48 × 48 are extracted as inputs. We perform data augmentation on the training images, which are randomly rotated by 90°, 180°, 270° and flipped horizontally. We use the Adam optimizer to update the parameters of the network with β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸. For all layers in the network, the initial learning rate is set to 0.0001, and the learning rate is then halved every 200 epochs. We implement our model with the PyTorch framework on a Tesla P100.
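For reference, the three degradation models described above can be reproduced with a few lines of PyTorch/torchvision. This is an illustrative sketch rather than the authors' data-preparation script: the paper does not state which bicubic implementation was used (MATLAB's imresize is common in SR work and differs slightly from F.interpolate), and "noise level of 30" is interpreted here as Gaussian noise with standard deviation 30 on the 0-255 intensity scale.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import gaussian_blur

def degrade(hr, mode="BI", scale=3, noise_sigma=30.0):
    """Generate an LR image from an HR tensor of shape (B, C, H, W) in [0, 255].

    BI: bicubic downsampling.
    BD: 7x7 Gaussian blur (sigma 1.6), then x3 bicubic downsampling.
    DN: x3 bicubic downsampling, then Gaussian noise (level 30).
    """
    x = hr
    if mode == "BD":
        x = gaussian_blur(x, kernel_size=7, sigma=1.6)
    x = F.interpolate(x, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    if mode == "DN":
        x = x + noise_sigma * torch.randn_like(x)
    return x.clamp(0.0, 255.0)

if __name__ == "__main__":
    hr = torch.rand(2, 3, 96, 96) * 255.0
    for m in ("BI", "BD", "DN"):
        print(m, degrade(hr, mode=m).shape)  # (2, 3, 32, 32) for scale 3
```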
Global reasoning upsampling module. In order to verify the importance of the GRUM, we remove the GRUM from the network, leaving only the GRRB for relation reasoning. As shown in Table 1, after removing GRUM, the performance of the network drops from 32.45 dB to 32.40 dB. When the Case Index is equal to 1, the corresponding model is the baseline model. We can observe that after GRUM is added to the baseline model, the network performance improves from 32.31 dB to 32.42 dB. It can be seen that although our baseline model already achieves quite good results, GRUM can still improve the performance through relation reasoning in the upsampling module. This also indicates that relation reasoning can indeed result in better performance. These comparisons fairly demonstrate the effectiveness of the GRUM for SR tasks.

Global reasoning reconstruction block. We then continue to study the effectiveness of GRRB for the network. After we add the GRRB to the baseline model, GRRB improves the performance of the model from 32.31 dB to 32.40 dB. Furthermore, the model with GRUM has already achieved good performance, and it is difficult to obtain further improvements; but when we add the GRRB to it, the network performance shows a significant improvement, and the PSNR value on Set5 increases from 32.42 dB to 32.45 dB. These results indicate that GRRB is essential for our network.

Basic parameters. Moreover, we also study the effects of the two basic parameters N and α on the performance of the model. As shown in Table 1, we observe that a larger N and a smaller α lead to higher performance. Considering that a larger N and a smaller α also bring more computation, we set 10 and 8 as the values of N and α, respectively.

As shown in Figure 4, although our SRGRN has a smaller number of parameters than EDSR, MDSR and D-DBPN, our SRGRN and SRGRN+ achieve higher performance, giving a better trade-off between model size and performance. This demonstrates that our method can well balance the number of parameters and the reconstruction performance.

For the BI degradation model, we compare our proposed SRGRN and SRGRN+ with seven other state-of-the-art image SR methods in quantitative terms. Following previous works, we also introduce a self-ensemble strategy to further improve the performance; we denote the self-ensemble method as SRGRN+. A quantitative comparison for ×2, ×3, and ×4 is shown in Table 2. We compare our models with other state-of-the-art methods on PSNR and SSIM. It can be seen that our proposed SRGRN outperforms the other methods on all datasets even without self-ensemble. After adopting self-ensemble, the performance further improves on the basis of SRGRN, and it achieves the best results on all datasets. It is worth mentioning that SRFBN uses DIV2K+Flickr2K as its training set, which employs more training images than ours, and previous research has concluded that more data in the training set leads to better results; however, their results are still not comparable to ours. Although RDN (Zhang et al., 2018b) is a state-of-the-art method, our SRGRN achieves better performance on all datasets through relational reasoning in the upsampling and reconstruction parts. The quantitative results indicate that our GRUM and GRRB play a vital role in improving network performance. We do not include the results of SRCNN (Dong et al.)
and VDSR for the BD and DN degradation models because of the mismatched degradation model. For BD and DN, there is no doubt that reconstruction becomes more difficult. As shown in Table 3 and Table 4, in the case of images with many artifacts and much noise, our SRGRN achieves excellent performance. This shows that SRGRN can effectively denoise and alleviate blurring artifacts. With self-ensemble added, SRGRN+ achieves a further improvement.

To show that our SRGRN can be widely used in the real world and performs robustly, we also conduct SR experiments on representative real-world images. We reconstruct some real-world low-resolution images that lack a lot of high-frequency information. Moreover, in this case, the original HR images are not available and the degradation model is unknown. Experiments show that our SRGRN can recover finer and more faithful real-world images than other state-of-the-art methods under these adverse conditions. This further reflects the superiority of relation reasoning.

In this paper, inspired by the process by which the human visual system reconstructs images, we propose a super-resolution global reasoning network (SRGRN) for image SR, which aims at completing the reconstruction of SR images through global reasoning. We mainly propose the global reasoning upsampling module (GRUM) and the global reasoning reconstruction block (GRRB) as the core of the network. The GRUM gives the upsampling module the ability to perform relational reasoning in a global scope, which allows this process to overcome the limitations of the receptive field and recover more faithful details by analyzing more contextual information. The GRRB likewise enables the reconstruction block to make full use of the interactions between regions and pixels to reconstruct SR images. We use SRGRN not only to handle low-resolution images that are corrupted by three degradation models, but also to handle real-world images. Extensive benchmark evaluations demonstrate the importance of GRUM and GRRB. They also indicate that our SRGRN achieves superiority over state-of-the-art methods through global reasoning.

Visual comparison with the BI degradation model. In Figure 5, we show a visual comparison on 4× SR. For image "img_078" from Urban100, we observe that most methods, even RDN and SRFBN, cannot recover the lattices and suffer from extremely severe blurring artifacts. Only our SRGRN alleviates these blurring artifacts and recovers sharper, clearer edges and finer texture. For image "MukoukizuNoChonbo" from Manga109, there are heavy blurring artifacts in all comparison methods, and the outlines of some letters are broken. However, our proposed SRGRN accurately recovers these outlines, more faithfully to the ground truth. The above comparisons are mainly due to the fact that SRGRN enables the upsampling and reconstruction modules to utilize more contextual information through relation reasoning.

Visual comparison with the BD and DN degradation models. In Figure 6, we show the visual comparison of SRGRN with other models. For image "img_014", when bicubic upsampling is used to recover images whose HR counterparts were blurred with a Gaussian kernel before bicubic downsampling, the obtained SR images have a large number of noticeable blurring artifacts. We also observe that most methods, including RDN and SRFBN, do not clearly recover the lines around the window. Only our SRGRN can suppress blurring artifacts and recover lines that are clear enough and close to the ground truth, through relation reasoning.
For image "img_002", a large amount of noise corrupts the LR image and makes it lose some detail. It can be seen that when bicubic upsampling is used, the obtained image not only has a large number of blurring artifacts but also a large amount of noise. However, we find that our SRGRN has great potential for removing noise efficiently and recovering more detail. This fully demonstrates the effectiveness and robustness of our SRGRN for the BD and DN degradation models.

Visual comparison on real-world images. In Figure 7, the resolution of these images is so small that a lot of high-frequency information is missing from them. Moreover, in this case, the original HR images are not available and the degradation model is unknown. For image "window" (with 200 × 160 pixels), only our SRGRN is able to recover sharper window edges and produce a clearer SR image. For image "flower" (with 256 × 200 pixels), most other methods recover images in which the edge of the pistil in the upper left corner looks unreal, and the edges of the petals in the whole image are very blurry. Our SRGRN recovers sharper edges and finer details than other state-of-the-art methods. The above analysis indicates that our model performs robustly under unknown degradation models. This further reflects the superiority of relation reasoning.

TL;DR: A state-of-the-art model based on global reasoning for image super-resolution.